(Image: Pinterest)
Some people worry that AI is being used to produce things like theses. The fact is that AI can produce text that looks very academic and qualified, yet the text itself makes no sense. Why can't a computer write the way humans do?
If we want to build an AI that finds information on the Internet, we must choose the method the system uses to select the texts that it combines into a document. With a linear, programmatic methodology, it is almost impossible to create text that is suitable for anything.
In that model, the system selects a group of texts and then starts connecting them into one whole. It can put those texts in a certain order by sorting their hashtags alphabetically.
Then the system can start its operation: the first paragraph is taken from the first text, the second from the second text, and so on. Using this method, the system produces a nice-looking ten-paragraph text. A sketch of that assembly follows below.
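A minimal sketch of this linear assembly in Python. The hashtags, the source texts, and the paragraph splitting are all assumptions made for illustration; no real search or text generation is involved.

```python
# A naive "linear" text assembler: sort source texts by hashtag,
# then take paragraph i from text i. No understanding is involved.

def assemble(sources: dict[str, str], paragraphs_wanted: int = 10) -> str:
    # Order the source texts by sorting their hashtags alphabetically.
    ordered = [sources[tag] for tag in sorted(sources)]
    result = []
    for i in range(min(paragraphs_wanted, len(ordered))):
        # Split text i into paragraphs and take the i-th one
        # (fall back to the last paragraph if the text is short).
        paras = ordered[i].split("\n\n")
        result.append(paras[min(i, len(paras) - 1)])
    return "\n\n".join(result)

# Hypothetical source texts keyed by hashtag.
sources = {
    "#astronomy": "Stars form in clouds of gas.\n\nThey shine by fusion.",
    "#jupiter": "Jupiter is the largest planet.\n\nIt has many moons.",
    "#space": "Space is mostly empty.\n\nDistances are vast.",
}

print(assemble(sources, paragraphs_wanted=3))
```

The output reads smoothly paragraph by paragraph, yet nothing checks whether the paragraphs belong together; that is exactly why the result can be nonsense.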
The problem is that the system selects the texts it combines by following some fixed methodology. One such method is to take the ten top-ranked pages from a Google search; another is to sort the pages purely by their hashtags.
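That selection step can be sketched too. The `search_top_pages` helper below is hypothetical, standing in for a real search engine client, which would need an API key and a proper library:

```python
# Hypothetical selection step: take the ten top-ranked pages for a query.
# search_top_pages is a stand-in, not a real search API.

def search_top_pages(query: str, n: int = 10) -> list[str]:
    # A real implementation would call a search engine here
    # and return the page texts; this returns canned stand-ins.
    return [f"Text of result {i} for '{query}'" for i in range(1, n + 1)]

top_ten = search_top_pages("star formation", n=10)
```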
Because the program itself does not understand what those pages contain, the result may be something other than what we want. This is the limit of AI: an AI that actually understands text is almost impossible to make.
We can make an AI that answers our questions and looks very reasonable. When we ask the AI what it can do, it gives impressive answers: it can say that it is an AI and that it knows a great deal about subjects like astronomy. What makes the AI look so wise is that it has a database of answers to the most common astronomical questions.
The system might store details of the most common stars, their spectral classes, and so on. It can hold information about the birth of stars, and if somebody asks about them, it simply sends that data to the loudspeaker.
Those databases are like tapes: they contain the information needed to answer questions about astronomy.
But if somebody asks a question about history or politics, the AI can simply answer that those topics belong to some other AI, and that its own expertise is astronomy. That same tape can even include a remark that the AI must stop quite soon because its time is limited. A minimal router along these lines is sketched below.
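Here is that routing logic as a Python sketch. The keyword list, the canned replies, and the five-minute time budget are all assumptions made for illustration:

```python
import time

# Canned replies, like "tapes": matched purely on keywords, with no understanding.
ASTRONOMY_KEYWORDS = {"star", "planet", "moon", "spectral", "galaxy"}
OFF_TOPIC_REPLY = "Those topics belong to another AI; my expertise is astronomy."
TIME_UP_REPLY = "I must stop quite soon, because my time is limited."

SESSION_START = time.monotonic()
TIME_BUDGET_SECONDS = 300  # assumed five-minute session limit

def route(question: str) -> str:
    # Play the "time is up" tape once the session budget runs out.
    if time.monotonic() - SESSION_START > TIME_BUDGET_SECONDS:
        return TIME_UP_REPLY
    words = set(question.lower().split())
    # On-topic only if the question contains an astronomy keyword.
    if words & ASTRONOMY_KEYWORDS:
        return "[playing the matching astronomy tape]"
    return OFF_TOPIC_REPLY

print(route("What is the spectral class of that star"))  # on topic
print(route("Who won the election"))                     # routed away
```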
That may seem very complicated, but it is just a large collection of sound files. When somebody asks a question whose words link it to something like Jupiter's moons, the computer finds the Jupiter database. Then the system can ask whether the person wants to hear about all of the moons.
If the person answers that the Galilean moons are interesting, that command activates the Galilean-moon databases.
The computer can then read those databases one after another. That means the AI might look very smart, but it is nothing more than a series of sound files activated by certain words, as the final sketch below shows.
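A sketch of that keyword-to-sound-file flow, again in Python. The file names, the keyword table, and the playback stub are all assumptions; a real system would hand these paths to an audio player:

```python
# Keyword-triggered "tapes": each topic maps to a list of sound files.
# The .wav paths are hypothetical; play() prints instead of playing audio.
SOUND_DB = {
    "jupiter moons": ["moons_intro.wav", "moons_overview.wav"],
    "galilean moons": ["io.wav", "europa.wav", "ganymede.wav", "callisto.wav"],
}

def play(files: list[str]) -> None:
    for f in files:
        print(f"[playing {f}]")  # stand-in for real audio playback

def answer(question: str) -> None:
    q = question.lower()
    for topic, files in SOUND_DB.items():
        if topic in q:  # pure keyword match, no understanding
            play(files)
            return
    print("[no matching tape]")

# Two-turn exchange: a broad question, then a narrower follow-up.
answer("Tell me about Jupiter moons")
answer("The Galilean moons are interesting")
```

Everything the listener hears is pre-recorded; the only "intelligence" is the string match that chooses which tape to start.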
https://bigthink.com/the-present/ai-language-models-gpt-3-pseudo-profound-bullshit/