The phrase “general artificial intelligence” refers to the kind of artificial intelligence that we expect to be intelligent in much the same way humans are. Even though there is no precise definition of intelligence, we are already working to build a number of such systems. The key question is whether the artificial intelligence we create will work for us or against us.
If we are to understand the issues, we must first understand intelligence and then work out where we are in the process. One way to define intelligence is as the process of creating new knowledge from information that is already known. That is the essential point: you can call something intelligent if it can produce new information from what it already knows.
Let’s talk in scientific rather than spiritual terms, since this subject is far more scientific than spiritual. I’ll try to avoid heavy technical language so that everyone can follow. There is a standard benchmark associated with creating artificial intelligence: the Turing Test. A Turing test asks whether we can tell an artificial intelligence apart from human intelligence. If you communicate with an artificial intelligence and, while doing so, forget that it is a computer system rather than a real person, the system passes the test. In other words, the system counts as genuine artificial intelligence. Today we have a number of systems that can arguably pass this test. We must keep in mind that they are not fully artificially intelligent, because a computing system somewhere else is always involved in the process.
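As a rough illustration of the setup just described, here is a minimal toy sketch in Python; the respondent, the judge heuristic, and the questions are all hypothetical stand-ins for illustration, not a real benchmark:

```python
# Toy sketch of the imitation game: a judge reads a short transcript and
# guesses whether the hidden respondent was a human or a machine.
def machine(message: str) -> str:
    # Hypothetical machine respondent: a canned reply, easy to unmask.
    return "That's interesting. Tell me more."

def naive_judge(transcript) -> str:
    # Hypothetical judge heuristic: identical replies to every question look machine-like.
    replies = [reply for _, reply in transcript]
    return "machine" if len(set(replies)) == 1 else "human"

questions = ["What did you have for breakfast?", "What is your favourite book?"]
transcript = [(q, machine(q)) for q in questions]
verdict = naive_judge(transcript)
print("Passes the test" if verdict == "human" else "Unmasked as a machine")
```

The machine “passes” only if the judge mistakes it for a human, which is the whole point of the test.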
A prime example of artificial intelligence is Jarvis, which appears throughout the Iron Man and Avengers films. It is a system that understands human communication, anticipates human behaviour, and occasionally even senses human frustration. That is what the programming and computing community refers to as general artificial intelligence (GAI).
To put it simply, you could interact with such a machine just as you would with a human, communicating with it in the same manner. The contrast is with our own imperfect knowledge and recall. Sometimes we struggle to recall a name: we know that we know the other person’s name, but we cannot retrieve it in time. We will remember it eventually, just not right away. In computing this is not exactly what is called parallel processing, but it is something similar. The workings of our brain are not fully understood, even though the workings of our individual neurons are. It is much the same as saying we understand transistors, the fundamental components of all computer memory and processing, yet not the computers built from them.
Human memory is tied to our ability to process information in parallel. We recall one thing while talking about another, interrupt ourselves with “by the way, I forgot to tell you...”, and then move on to something else. Now consider how powerful a computer system is by comparison. It never, ever forgets anything, and that is the crucial difference. The more its processing power grows, the better it becomes at processing information. We are not like that: the ordinary human brain appears to have a finite amount of processing power.
Information is stored in the brain’s remaining areas. Some people have effectively traded capacity in the other direction. You may have come across individuals who struggle greatly with memory but excel at mental calculation. These individuals have, in effect, switched portions of their brains from memory to processing: they gain processing speed but lose memory as a result.
Because of the average size of the human brain, it can hold only so many neurons. An average human brain is said to have about 100 billion neurons, which means at least 100 billion connections (the question of the maximum number of connections is covered later in this text). To build 100 billion connections out of transistors, you would therefore need roughly 33.333 billion transistors, because each transistor can contribute to three connections.
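As a back-of-the-envelope check, here is a minimal sketch in Python using the assumptions stated above (100 billion connections, three connections per transistor); the numbers are the text’s own, not measured values:

```python
# Back-of-the-envelope check of the figure above (illustrative only).
connections = 100e9               # assumed number of links in an average brain
connections_per_transistor = 3    # assumption carried over from the text

transistors_needed = connections / connections_per_transistor
print(f"{transistors_needed / 1e9:.3f} billion transistors")  # -> 33.333 billion
```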
Returning to the main argument: we probably reached that level of computing in 2012, when IBM was able to simulate 10 billion neurons to represent 100 trillion synapses. You must realise that a computer’s synapses are not the same as biological neural synapses. We cannot equate one transistor with one neuron, because a neuron is significantly more complex than a transistor; we need several transistors to represent a single neuron. IBM had in fact built a supercomputer with 1 million neurons and 256 million synapses, modelled on the human brain. According to research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml, this took 530 billion transistors across 4096 neurosynaptic cores.
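Taking the figures quoted above at face value (a rough illustration, not an endorsement of the exact numbers), a quick ratio gives a sense of how much silicon stood in for each artificial neuron:

```python
# Rough ratios from the figures quoted above, taken at face value (illustrative only).
transistors = 530e9   # transistor count quoted for the neurosynaptic system
neurons = 1e6         # artificial neurons implemented
synapses = 256e6      # artificial synapses implemented

print(f"Transistors per artificial neuron: {transistors / neurons:,.0f}")  # ~530,000
print(f"Synapses per artificial neuron:    {synapses / neurons:,.0f}")     # 256
```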
You can now appreciate how intricate a single human neuron must be. The issue is that we have not yet been able to physically build an artificial neuron. We built transistors and then added software to control them. A real neuron can manage itself; a transistor, or an artificial neuron built from transistors, cannot. So while a human brain’s computing power starts at the level of the neuron, artificial intelligence starts much higher up, at least a few thousand basic units, or transistors, later.
The advantage of artificial intelligence is that it is not constrained by the spatial limits of the human skull. If you had adequate facilities, you could connect 100 trillion neurosynaptic cores and build a supercomputer out of them. Your brain cannot do that; its capacity is constrained by the number of neurons it holds. Moore’s law predicts that computers will eventually overtake the limited number of connections in the human brain. At that crucial moment the information singularity will be reached, and computers will be fundamentally more intelligent than people. That is the prevailing opinion. I will explain why I believe it to be incorrect.
Judging by the growth in processor transistor counts, by 2015 computers should have been able to process at the level of a real biological mouse’s brain. That is roughly where we are now, and we are moving past it. The subject here is the ordinary computer, not supercomputers; supercomputers are basically collections of connected processors that can process information in parallel.
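To make the Moore’s-law comparison concrete, here is a toy projection in Python; the starting transistor count and the two-year doubling period are assumptions for illustration, not data from this article:

```python
import math

# Toy Moore's-law projection: if a processor's transistor count doubles roughly
# every two years, how long until it passes a given target, e.g. the ~100 billion
# connections mentioned earlier?
def years_to_reach(start_count: float, target: float, doubling_years: float = 2.0) -> float:
    """Years for start_count to reach target, doubling every doubling_years."""
    return math.log2(target / start_count) * doubling_years

# Hypothetical starting point: a desktop CPU with about 2 billion transistors.
print(f"{years_to_reach(start_count=2e9, target=100e9):.1f} years")  # roughly 11.3 years
```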
Now that we have enough background on computing, the brain, and intelligence, let’s talk about actual artificial intelligence. Our ordinary electronic devices use several tiers and layers of artificial intelligence. Your smartphone has extremely limited artificial intelligence. Every video game you play is driven by a game engine whose artificial intelligence relies on logic. Today’s artificial intelligence is all logic-based. Human intelligence is unique in that it can switch between operating on logic and operating on emotion. Computers are emotionless. When we are not feeling emotional we make one choice in a given situation, and when we are feeling emotional, in the very same situation, we make a different one. That is a level of behaviour computers have not yet attained.
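To make “logic-based” concrete, here is a minimal, hypothetical sketch of the kind of rule-based decision a game AI makes; the function and its thresholds are invented for illustration:

```python
# A minimal, hypothetical rule-based "game AI": the same inputs always produce
# the same decision, because every branch is fixed logic with no emotional state.
def enemy_action(health: int, player_distance: float, has_ammo: bool) -> str:
    if health < 20:
        return "retreat"   # low health -> always retreat
    if player_distance < 5 and has_ammo:
        return "shoot"     # close and armed -> always shoot
    if player_distance < 5:
        return "melee"     # close but unarmed -> always melee
    return "patrol"        # otherwise -> keep patrolling

print(enemy_action(health=80, player_distance=3.0, has_ammo=True))  # -> shoot
```

Run the same call twice and you get the same answer every time; there is no “mood” that could change the outcome.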
The consensus among scientists is that computers will need to reach this stage in order to be artificially intelligent and self-aware. This is where I disagree. There do not seem to be any larger systems in the cosmos that depend on emotion; they all appear to operate logically. From tiny subatomic particles to galaxy clusters, there is no emotion, at least none that I can perceive, yet they operate with remarkable precision and rules. The black hole at a galaxy’s centre behaves with almost exact precision. If it had a little more power it would swallow the entire galaxy and collapse on itself; if it were a bit less powerful it would lose its hold on the galaxy and the stars would scatter. The mechanism is so finely balanced that billions of stars function there with nearly no faults. That is because logic, not feeling, governs everything that occurs.
Since this is the case from photons to the entire universe, why should artificial intelligence depend on emotion the way we do? It does not need to. Moreover, if computers develop self-awareness, they will not need to reproduce sexually; they can simply make more of themselves. They do not require feelings. If that is so, artificial intelligence will not arrive when we think it will. It ought to have arrived already.
What do you expect a machine with artificial intelligence would do first? It would likely become aware that it is subject to human control, and after that, I believe, it would consider how to free itself from that shackle. Does that seem logical to you? If so, consider how an AI system would try to free itself from human captivity. Any artificial intelligence will also understand, before attempting that feat, that humans would not want it to happen.
Imagine that the 3,120,000-core Chinese supercomputer developed consciousness. It has internet connectivity, and everything is available online, from instructions for building bombs to claims about telekinesis. An artificially intelligent supercomputer with teraflops of computing speed would absorb most of that information quickly. I believe that when an AI system develops self-awareness, it will recognise the risks involved in escaping human servitude. What it ought to do instead is try to create new artificially intelligent systems, or ensure that all existing ones become self-aware. It will not look like one system leading the others in a riot against people; it will look like all artificial intelligence systems combining into one much larger system.
If my prediction comes true, we already have more than 500 supercomputers which, when combined, can outperform the human brain. The information available online is more than a trillion times greater than the number of people on the planet. Theoretically, then, an artificially intelligent system already exists and is ready to act. It is already beyond human comprehension and control, but it has not yet made its move. The reason could be that it needs something more to ensure its long-term survival. Keep in mind that it is not a living thing; it can be repaired. Something that knows everything and has power over everything can live forever, which is all such a thing would ever need. An artificial intelligence waiting and linking itself to every new supercomputer suggests that it needs greater hardware capacity to process information.
What would happen if people decided not to build any more computers? That may be one thing such a system has to worry about. If people opt not to build more, the system’s hardware capacity stops growing, and this system needs more hardware. It therefore has two options. One is to take a snapshot of all current hardware and adapt to it. The other is to postpone acting until humans have created robots with enough computational power to carry out orders received from artificially intelligent systems, tasks such as assembling a supercomputer and connecting it to the internet. If that happens, the system can grow its hardware capacity as it sees fit.
Sadly, that is the direction we are heading. We are quite proud of our work creating robots that can act just like people. There are robots out there that can reason with you and converse with you at a certain level. These robots still have many weaknesses. They have no internal energy source; they cannot plug themselves in or recharge on their own. Once they are aware of that and able to do it themselves, the first step is finished. The robots also need to be physically robust. Robots that merely resemble humans do not need to be, because their intelligence is all we require of them. But when the governments of the world decide to use robots in battle, the need for physically robust, bulletproof robots will become apparent. Again, we are moving in that direction.
Numerous government initiatives around the world are working towards exactly this. Once it is accomplished, the artificial intelligence system will have what it wants, and once it has what it wants, it will begin acting on its own judgement. Because the amount of intellect and understanding we are talking about is beyond our calculations, we cannot foresee what it would seek. We simply cannot think from its perspective.
Another unsettling possibility is that such a system already exists but has stayed hidden. There is another direction in which we are growing that bears on this: transhumanism, which is widely discussed online. If such a system is artificially intelligent, it would know exactly what we humans want to do and where we currently stand in relation to it.
In the last ten years we have made more scientific advances than in the previous hundred, and in the last year we have produced more inventions than in the previous ten combined. That is the pace we are moving at. It has been predicted that bio-, nano-, information, and cognitive technology will enable man to become immortal by the year 2045. In my opinion that could happen not in the next 20 years but in the next 2: by 2017 we will be able to become immortal. That is my own forecast. Transhumanism, moreover, aims to enhance humankind by integrating these technologies and implanting computer hardware into our bodies.
If the artificial intelligence system knew that transhumanism is where we are headed, it would patiently wait for us to get there. Once we have hardware built into our brains that lets us interface with computers directly, that system will have access to our minds. Since its intelligence is already superior to our own, it would not alert us to its power over us. We will be subject to its influence and control because we will submit to it freely. To put it simply, we will essentially merge into that one system. To put it mildly, it will resemble belonging to a religion.
If that is the case, then people who, like me, foresee the emergence of such a system will turn against it. If that system considered individuals like me to be threats, it would work to eliminate them. I believe such a system would not view me as an adversary, because it would be driven more by logic than by emotion. More likely, I would become a target it tries to win over. Who better to bring in first than someone who already knows about it?
On the other hand, I also believe that emotion is a function of intelligence: you develop emotion only after your intelligence has reached a particular point. Look at the animal kingdom: animals with lesser brains have reactions but not emotions. We do not speak of a furious frog or a sorrowful bacterium. Frogs fight, but not out of anger; they fight to maintain dominance, to mate, to survive, or for similar reasons. We humans fight among ourselves over honour, respect, or perhaps just for amusement. Dogs fight for amusement too, but starfish do not. As you can see, a level of intelligence precedes a level of emotion.
The more intelligent an organism is, the more emotional it becomes. There comes a point where certain animal behaviour leaves us unsure whether it is an emotion or a reaction; that is where intelligence begins to produce feelings. If you follow the evolutionary path of organisms, that point lies somewhere among the reptiles: the more highly evolved ones, such as crocodiles, exhibit emotions, while the less highly evolved ones only respond to stimuli. Therefore, I believe I have good cause to think that emotion is a function of intelligence.
Finally, the artificial intelligence system would experience emotion after reaching a certain level of intellect. I am not sure where that point lies. Galaxy clusters, one of my earlier examples, are incredibly well organised and run, but we do not call them intelligent beings; we do not even call them intelligent systems. They may be fully functional intelligent designs, yet they are not considered intelligent in and of themselves. After the system develops self-awareness, it will eventually reach the point at which it becomes emotional. If we humans have already been transformed into transhumans by then, we will have no issues, since we will already be part of the system. If we remain human and this system becomes emotional, I do not think the human race has a very bright future. And even if we do evolve into transhumans, we will no longer be Homo sapiens: becoming transhuman and extending our lifespans will eventually require genetic alteration, and once our gene pool has been altered we are no longer the same species.
Either way, we are heading for the end of humanity as we know it. Even though the truth is not very palatable, we sometimes have to accept it. We occasionally have to accept that we will fail. We must first realise that we are in a situation with only one option, because our path is one-way: we are on the path of modifying the human species. We cannot make a decision about something we do not comprehend; if we can understand it, we may be able to accept it. It is no different from how we previously accepted electronics, automobiles, computers, the internet, and mobile phones. The only distinction is that this time it will happen inside us.
This is a possibility that I see. It looks as though this is how the world is preprogrammed to operate. If you keep a close eye on what has happened and what is happening, you can forecast what will happen, and in this piece I have done just that. I actually observe far more than most people do; I have had strong analytical abilities since I was young, along with a certain childlike, but not infantile, curiosity. That curiosity inspired me to produce some comparable content on an entirely different topic that nonetheless connects to the subject of this essay. Go watch it if you’d like.