Should we be concerned that the Product Liability Directive (PLD) does not cover immaterial damages, such as violations of privacy or reputation? Those harms were not seen as problems when the EU drafted the PLD. But today we have new tools that collect information from our behavior. AI-based systems can create realistic-looking depictions of real people doing things those people never actually did, and that can cause at least embarrassing situations.
Who takes responsibility if somebody makes a video in which some prime minister robs a bank? The big question with AI is whether recognizable images portraying real people should be prohibited, or otherwise kept out of the AI's output. The problem is that the AI generates images by following the user's instructions. That means a person can simply feed the neighbor's details to the AI, and the AI produces an image with the neighbor's face on it.
When we think about the PLD and the other directives meant to protect us against product malfunctions, they do not cover things like ordinary blogs. It is possible that while someone is traveling to China, somebody else publishes a manifesto in that person's name, justifying the Tiananmen events and human-rights violations in China. Such a blog can cause very big problems at the border.
The point is that AI is a new tool that can do many things ordinary systems cannot, and the main problem with AI is what is left unsaid about it. AI lets people express their creativity, but it can also be misused to deceive people. When we read newspaper articles about people generating child sexual abuse material with AI, we must ask ourselves: where is the limit between privacy and security? When should the AI track the person who uses it and report that activity to the authorities?
There are lots of things people should know when they use a product. Those things involve privacy and similar concerns, but there is another argument: what if somebody creates sick material using AI? Another issue is the race between East and West over who builds the best AI. AI is a tool that connects different software under one dome. In the same way, it can connect many other things, such as satellites and airborne, underwater, and ground systems, into one large macro-scale system.
The Eastern governments are interested in AI's military, intelligence, and surveillance capabilities. The biggest problem is that there are no limits in the East on AI development work. The Eastern authorities allow unlimited data use in that process; they do not care about copyright or other things that slow down R&D. AI is the next-generation weapon.
It can generate malware faster than any human programmer. AI can be used to collect data from social media and then link that data to other sources, such as names that an intelligence service has intercepted. The AI can search all of social media for people with matching names, and then scan their photos for details like uniforms, which mark a person as an interesting target for intelligence.
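Stripped of the AI framing, the cross-referencing described above is just a join-and-filter pipeline: match names against a watchlist, then flag profiles whose photos contain certain markers. A minimal sketch in Python; all names, profiles, and tags here are invented for illustration, and real systems would of course use fuzzy matching and image classifiers rather than exact strings:

```python
# Hypothetical illustration of the name-matching pipeline described above.
# All data is invented; no real services or APIs are involved.

# Names obtained from some intercepted source (lowercased for matching)
watchlist = {"jane doe", "john smith"}

# Mock "social media" profiles: a name plus tags that an image
# classifier might have attached to the profile's photos
profiles = [
    {"name": "Jane Doe",    "photo_tags": ["beach", "dog"]},
    {"name": "John Smith",  "photo_tags": ["uniform", "medal"]},
    {"name": "Alice Brown", "photo_tags": ["uniform"]},
]

# Photo details that would mark a profile as "interesting"
MARKERS = {"uniform", "medal"}

def flag_targets(profiles, watchlist, markers):
    """Return names that appear on the watchlist AND whose photos
    contain at least one marker tag (e.g. a uniform)."""
    hits = []
    for p in profiles:
        if p["name"].lower() in watchlist and markers & set(p["photo_tags"]):
            hits.append(p["name"])
    return hits

print(flag_targets(profiles, watchlist, MARKERS))  # only "John Smith" matches both tests
```

The point of the sketch is how little machinery the basic idea needs: the "intelligence" lies in the data sources and classifiers feeding it, not in the matching logic itself.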
Reporters and social media influencers are also people who can serve Eastern intelligence and propaganda, and we must have tools to fight back. AI can steal people's identities, so we can try to set rules for those systems. But laws are weak protection if the attacker operates outside the EU, from China or Russia. The Eastern nations and authorities do not care about laws the way we are used to following them. We can slow down or even stop AI development with regulation, and then we should remember the Great Wall of China: that wall stopped technical development and progress in China.
That led to the situation where European armies simply marched into China in the late 19th century and faced a feudal army that could not resist modern European forces. If we do not think carefully about regulation, it can do to Europe what the Great Wall did to China. We know we need regulation, but if we draft it carelessly, we will face a situation where we cannot respond to AI espionage.
Remote access to data systems allows users to run large language models (LLMs) from a great distance. Wrong regulations create dangers, and if we simply believe people and what they say, we can let the largest Trojan horse into our systems. Regulation is always a problem. Remote use of systems allows R&D to work for customers across the Atlantic: VPN-protected, cloud-based systems make it possible to operate laboratories remotely, so developers can build software development tools for customers from their homes. Regulations are ineffective if nobody follows them.
The customer can expect something from data security. The problem is that many customers know nothing about programming, data leaks, and other such things. Sometimes they expect the vendor or some authority to handle data security for them. There is always one big question about data systems: what does the system's maker not tell people? "Open source" means that the customer can inspect the program's source code, but checking it requires programming knowledge, and the customer might not even have the skills to know what questions to ask.

Computer programs, including AI, are always connected to the environment where they are made. The state where a programmer works can order or force that person to put malware in the code. In the West, we were used to thinking that the authorities arrest hackers; we could hardly imagine that some governments support hackers and give them expensive tools for their missions. State-controlled hacking was unknown to us until hackers stole defense secrets from the USA. Those hackers were traced to China, and they remain free because they worked under the control of Chinese intelligence.