Removing unnecessary work makes high-power computers more effective.
A new type of data analysis boosts calculation power. The idea is to use tensors that connect the operands. If we multiply something by zero, the answer is zero. Even if the calculation inside the brackets is very complicated, the whole expression becomes zero when it is multiplied by zero: (X+Y+Z)*0=0. We can scale this rule to all calculations: the result is always zero.
That means multiplying anything inside brackets by zero gives zero, so the system can leave that calculation without attention. In AI research the value zero can mean that the system removes the calculation, or that it waits for the next data packet to get more information. The values zero and one can act as control signals: at zero the system waits, and if the answer is one or more, the system continues to the next operand.
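A minimal sketch of that rule in Python (the function names are invented for illustration, not taken from any real library): the evaluator skips an expensive bracketed calculation whenever its multiplier is zero, and treats a zero operand as a signal to wait while any larger value lets it continue.

def expensive_brackets(x, y, z):
    # Stands in for a complicated calculation inside the brackets.
    return x + y + z

def evaluate_term(multiplier, x, y, z):
    # (X + Y + Z) * 0 = 0, so the bracketed work can be skipped entirely.
    if multiplier == 0:
        return 0
    return expensive_brackets(x, y, z) * multiplier

def process_operands(values):
    results = []
    for value in values:
        if value == 0:
            continue  # zero: wait for the next data packet instead of computing
        results.append(evaluate_term(value, 1, 2, 3))  # one or more: continue
    return results

print(process_operands([0, 1, 2]))  # the zero operand triggers no work at all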
Machine learning requires that the computer can use multiple data sources, and that requires the ability to handle very large data masses. One way to make that operation easier is to remove unnecessary data from the mass. The way researchers can do that is simple: they can remove all zeros from the operands. If the result of some calculation is zero, that result is "empty". This data-handling protocol can also be used in robotics to make a robot's reactions faster.
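As a hedged illustration of removing zeros from the data mass, the sketch below stores only the nonzero entries of an array in a coordinate-list style sparse format (the helper names are made up for this example), so the "empty" values never reach the calculation.

def to_sparse(dense):
    # Keep only (index, value) pairs for nonzero entries.
    return [(i, v) for i, v in enumerate(dense) if v != 0]

def sparse_dot(sparse_a, dense_b):
    # Dot product that never multiplies by a stored zero.
    return sum(v * dense_b[i] for i, v in sparse_a)

dense = [0, 0, 3, 0, 5, 0, 0, 2]
sparse = to_sparse(dense)           # only three entries survive
print(sparse)                       # [(2, 3), (4, 5), (7, 2)]
print(sparse_dot(sparse, [1] * 8))  # 10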
Suppose the robot has three computers that act like the human brain: two main computers and a third computer that makes sure the system cannot get stuck if the two main data-handling units get different answers. If the system removes from the third computer all cases where the two main units get the same answer, that removes unnecessary work. In this system the third computer selects the answer that the system uses from the differing answers the main computers give. But that is important only when the two main components get different answers.
"MIT and NVIDIA researchers have created two techniques to enhance sparse tensor processing, improving performance and energy efficiency in AI machine-learning models. These techniques optimize zero value handling, with HighLight accommodating a variety of sparsity patterns and Tailors and Swiftiles maximizing on-chip memory utilization through “overbooking.” The developments offer significant speed and energy usage improvements, enabling more specialized yet flexible hardware accelerators." (ScitechDaily.com/New Techniques From MIT and NVIDIA Revolutionize Sparse Tensor Acceleration for AI)
"Components of the Cauchy stress tensor in Cartesian coordinates" (Wikipedia, Tensor).
"Innovative new chip technology integrates data storage and processing, significantly boosting efficiency and performance. Inspired by the human brain, these chips, expected to be market-ready in three to five years, require interdisciplinary collaboration to meet industry security standards." (ScitechDaily.com/Innovative new chip technology integrates data storage and processing, significantly boosting efficiency and performance. Inspired by the human brain, these chips, expected to be market-ready in three to five years, require interdisciplinary collaboration to meet industry security standards."
So if the two main computers get the same answer, it is not necessary to send that case to the third computer at all.
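A minimal sketch of that arbitration rule (the function names are invented for illustration): the tie-breaker runs only when the two main units disagree, so agreeing cases never create work for it.

def resolve(main_a, main_b, tie_breaker):
    # Return the agreed answer, consulting the third computer only on conflict.
    answer_a = main_a()
    answer_b = main_b()
    if answer_a == answer_b:
        return answer_a  # agreement: the third computer is never involved
    # Disagreement: the third computer selects which answer the system uses.
    return tie_breaker(answer_a, answer_b)

# Example with two redundant deciders and a placeholder arbiter.
result = resolve(lambda: "turn left",
                 lambda: "turn left",
                 lambda a, b: a)  # the arbiter logic is a stand-in
print(result)  # "turn left", decided without the third computer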
Removing unnecessary work makes high-power computers more effective. However, the problem is how the system selects the unnecessary work. The system must have certain parameters to detect whether an operation is unnecessary.
One method is to find zeros in the data mass. If a zero scales itself over a calculation or a series of calculations, the operation is empty. The system can remove empty operations, because they cause no action, and that saves time and work in complicated, heavy calculations.
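Under the same assumptions, a scheduler can scan a queue of pending calculations and drop every operation whose scaling factor is zero before anything runs (the queue layout here is hypothetical).

# Each pending operation is a (scale_factor, description) pair.
work_queue = [
    (0, "huge matrix product"),    # empty: zero scales over the whole result
    (2, "sensor fusion update"),
    (0, "redundant filter pass"),  # empty as well
    (1, "path planning step"),
]

# Keep only operations that can actually change the result.
active = [(k, task) for k, task in work_queue if k != 0]
print(active)  # [(2, 'sensor fusion update'), (1, 'path planning step')]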
Things like quantum computers require high-power binary computers. The data is driven to the qubits by AI-based binary computers that control the qubits and their states. The quantum computer itself is useless alone: it requires a binary layer that drives data input into the quantum system. And if those systems operate without friction, that revolutionizes computing.
If we think that an AI or some other computing system has two sides that operate on the same problems, zero can mean that there is no conflict. In this model, two computers or data-handling lines work on the same problem, and when they introduce their results, the system subtracts one solution from the other.
If the answer is zero, the system has no conflict between the answers. That allows the system to compile the data that sensors bring in, and it improves the effectiveness of data storage.
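A hedged sketch of that subtraction check (plain Python numbers; a real system would likely compare within a tolerance):

def conflict(result_a, result_b, tolerance=0.0):
    # A zero difference means the two data-handling lines agree.
    return abs(result_a - result_b) > tolerance

print(conflict(42.0, 42.0))  # False: the difference is zero, no conflict
print(conflict(42.0, 40.5))  # True: a nonzero difference flags a conflict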
In a secured system, the data storage can record information about things like when a file is opened and what the file involves. The system can keep two backup copies of the files, and if somebody tries to change the data in those files, or to change a file itself, the system reports that to superior officials, who accept or deny the changes.
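One way such a check could work, sketched with Python's standard hashlib (the escalation step is only a placeholder function, not part of any real protocol): compare a file's current fingerprint against the two stored backup copies and escalate any mismatch for approval.

import hashlib

def file_hash(data: bytes) -> str:
    # Fingerprint of the file contents.
    return hashlib.sha256(data).hexdigest()

def check_integrity(current: bytes, backup_1: bytes, backup_2: bytes) -> bool:
    # True if the file still matches both backup copies.
    return file_hash(current) == file_hash(backup_1) == file_hash(backup_2)

def review_change(current, backup_1, backup_2, notify):
    if not check_integrity(current, backup_1, backup_2):
        # A change was detected: escalate to a superior official.
        return notify("file changed; approve or deny?")
    return "no change"

print(review_change(b"v2", b"v1", b"v1",
                    notify=lambda msg: "escalated: " + msg))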
https://scitechdaily.com/new-techniques-from-mit-and-nvidia-revolutionize-sparse-tensor-acceleration-for-ai/
https://scitechdaily.com/twice-as-powerful-next-gen-ai-chip-mimics-human-brain-for-power-savings/
https://en.wikipedia.org/wiki/Cauchy_stress_tensor
https://en.wikipedia.org/wiki/Tensor