Hi Peter,
I have a software engineering background, and I'm familiar with how Neural Network (NN) algorithms work; they are the basis for most of what is being called AI, or Machine Learning (ML), today.
I agree that AI / ML opens up things we have never had before, and some of these solutions and applications seem pretty amazing. But like any new computer / software technology, there are people who think it will do a lot of things we will eventually find out it can't do, and there are people who are afraid of it for reasons they shouldn't be.
In the end, AI is just another type of software application. Previously, as engineers, if we needed to solve a problem, we had to work out the solution, the algorithm, a priori, meaning we had to know how to solve the problem beforehand and then write the software that encoded that solution. We know the inputs, we know the algorithm (or function), and it produces the outputs.
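Just to make that concrete, here's a toy sketch in Python (the example problem is mine, picked purely for illustration). Converting Fahrenheit to Celsius is the traditional case: we already know the algorithm, so we simply write it down ourselves.

    # The traditional way: we already know how to solve the problem,
    # so we write the algorithm ourselves, up front.
    def fahrenheit_to_celsius(f):
        return (f - 32) * 5.0 / 9.0

    print(fahrenheit_to_celsius(212))   # 100.0 -- known input, known rule, predictable output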
What NNs let us do is skip having to know how to solve a problem a priori. Let's say we don't know what the algorithm for a solution is. We do know the inputs, and we know the outputs we want. A Neural Network can take that and, over time, adjust itself to create the algorithm that connects the inputs to the outputs. It's just a very fancy pattern detector and matcher.
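And here's that idea in miniature (again just a toy Python sketch of mine, not a real network, which would have millions of adjustable numbers instead of two, but the principle is the same). This time we never write the rule; we only hand the program example inputs and the outputs we want, and it adjusts its own numbers until they line up.

    # The NN idea in miniature: we do NOT write the rule.
    # We only provide example inputs and the outputs we want,
    # and let the model adjust its own internal numbers to fit them.
    examples = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]   # secretly y = 2x + 1

    w, b = 0.0, 0.0                      # the model's adjustable "knobs"
    for _ in range(2000):                # training: adjust a little bit, many times over
        for x, y in examples:
            guess = w * x + b            # the model's current answer
            error = guess - y            # how wrong it was
            w -= 0.01 * error * x        # nudge the knobs in the direction that shrinks the error
            b -= 0.01 * error

    print(round(w, 2), round(b, 2))      # about 2.0 and 1.0 -- it found the rule on its own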
But the trick is all in the training of the Neural Network. Training is the process of feeding it both positive matches (this input should produce that output) and negative matches (this input should not produce that output). We have a saying in software: Garbage In, Garbage Out (GIGO). Any NN is only as good as the training it has had.
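In miniature, training looks something like this (one more toy Python sketch of mine, a single-neuron "perceptron"-style rule rather than a full network): we show it labeled positive and negative examples, and it learns a boundary from them. Feed it mislabeled garbage instead and it will dutifully learn garbage.

    # Training is just showing labeled examples: positive matches (output should be 1)
    # and negative matches (output should be 0). The pattern to learn here:
    # small numbers are "no", big numbers are "yes".
    training_data = [(1, 0), (2, 0), (3, 0),     # negative examples
                     (7, 1), (8, 1), (9, 1)]     # positive examples

    w, b = 0.0, 0.0
    for _ in range(100):                         # repeat until the answers stop being wrong
        for x, label in training_data:
            prediction = 1 if w * x + b > 0 else 0
            # only adjust the knobs when the model got it wrong
            w += 0.1 * (label - prediction) * x
            b += 0.1 * (label - prediction)

    print(1 if w * 2 + b > 0 else 0)   # 0 -- a small number is "no"
    print(1 if w * 8 + b > 0 else 0)   # 1 -- a big number is "yes"
    # If the labels above had been garbage, it would learn garbage just as happily (GIGO).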
This also means that any AI / ML / NN algorithm will just spit out whatever matches its input. It can't independently determine, or verify, the truth of anything. If the input is garbage, the output will be garbage, and the output will always be based on the training data it was given, plus whatever documents or files it was handed to analyze. This is why AI systems (hardware + software) are said to "hallucinate": they produce output that isn't true because they are just matching patterns. I don't like that as an industry term, because it tends to anthropomorphize an inanimate object. AI systems cannot "develop consciousness" and they do not "think". To many people it looks like they do, and sadly, some people are easily fooled. Under the hood, it's nothing more than some fancy math.
In the end, AI is just a tool, nothing more. Its usefulness depends on the person who wields it, and on knowing its strengths, limitations, and weaknesses.