How familiar are you with today's learned AI techniques? A system can learn an algorithm that I cannot explain or make human-readable.
I don't doubt what you are saying, but so what! Are the machines in charge of what is acceptable human behavior? Are they the gatekeepers of what we are entitled to know about human history and human knowledge? What do the machines know about empathy, or why we like puppies and kittens? This willingness to cede our lives to the machines makes me ill.
If you are saying that the algorithms are writing themselves, then I am not aware of that. Is that what you are saying?
Kinda-sorta. Someone had to write the initial learning code, but after that the algorithms literally write themselves based upon the data they are exposed to, and the resulting algorithm cannot be broken apart to figure out what the hell it is doing. The only way to do that is to run it on a corpus of data and view the results.
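To make that concrete, here is a purely illustrative toy (hypothetical data, plain Python): the human only writes the generic training loop; the actual decision rule ends up as learned numbers that nobody authored directly, and the practical way to see what the model does is to probe it with inputs and look at the outputs.

```python
import random

random.seed(0)

def train(samples, epochs=200, lr=0.1):
    """The only part a human writes: a generic perceptron training loop.
    The 'algorithm' that comes out is just the weight vector below,
    shaped entirely by whatever data it is exposed to."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0

# Made-up corpus: random points in the unit square, labeled 1 when x + y > 1.
data = []
for _ in range(100):
    x = (random.random(), random.random())
    data.append((x, 1 if x[0] + x[1] > 1 else 0))

w, b = train(data)

# The learned "code" is just opaque numbers; to characterize the model,
# you run it on data and inspect the results.
print(w, b)
print(predict(w, b, (0.9, 0.9)))  # a point well inside class 1
print(predict(w, b, (0.1, 0.1)))  # a point well inside class 0
```

Nobody sat down and wrote "if x + y > 1"; the weights merely approximate it, and with a bigger model the learned rule stops being readable at all.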
So the algorithms are supposed to be learning it seems. Unfortunately, I don't think that is what is happening.
Well, you can let them keep learning in production, or you can train them and then treat them statically. Microsoft found this out with their Tay bot, which started spewing Nazi crap because people were feeding it trash from the net. So most teams do not let their live projects learn. They do that in the background and control the inputs.
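A hypothetical sketch of that "train offline, deploy frozen" pattern (the names here are made up, not any real API): all learning happens on a curated corpus, and the live system is read-only, so hostile traffic can't poison it the way it did Tay.

```python
def train_offline(curated_corpus):
    """All learning happens here, on inputs humans have vetted.
    Toy 'model': just word counts from the curated data."""
    counts = {}
    for word in curated_corpus:
        counts[word] = counts.get(word, 0) + 1
    return counts

class FrozenModel:
    """Deployed model: its learned state is fixed at deploy time."""
    def __init__(self, learned_counts):
        self._counts = dict(learned_counts)

    def score(self, word):
        # Live traffic is read-only: we classify it, we never learn from it.
        return self._counts.get(word, 0)

model = FrozenModel(train_offline(["puppies", "kittens", "puppies"]))
print(model.score("puppies"))            # learned from the curated corpus
print(model.score("junk_from_the_net"))  # unseen live input changes nothing
```

Retraining then happens in the background on freshly curated data, and a new frozen model is swapped in, rather than letting the live one update itself.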
I remember the Tay bot takedown. That was hilarious. It ended up being a VERY racist bot 🤣
Naive AI is what it is. It'll never replicate human intelligence, but what it does is create the equivalent of an expert system without having to code the rules.
I know you do work in this area, specifically in AI whose workings are humanly understandable. I was avoiding that topic for simplicity.