Artificial Intelligence: Mankind's Last Invention
Go is arguably the most complex board game in existence. Its goal is simple: surround more territory than your opponent. The game has been played by humans for the past 2,500 years and is thought to be the oldest board game still played today. However, it's not only humans playing this game now. In 2016, Google DeepMind's AlphaGo beat 18-time world champion Lee Sedol in four out of five games. Now, normally a computer beating a human at a game like chess or checkers wouldn't be that impressive, but Go is different. Go cannot be solved by brute force. Go cannot be predicted. There are over 10^170 possible positions in Go; to put that into perspective, there are only about 10^80 atoms in the observable universe. AlphaGo was trained using data from real human Go games. It ran through millions of games, learned the techniques used, and even made up new ones that no one had ever seen.
And this alone is very impressive. However, what many people don't know is that only a year after AlphaGo's victory over Lee Sedol, a brand new AI called AlphaGo Zero beat the original AlphaGo, not in four out of five games, not in five out of five games, not in 10 out of 10 games, but 100 to zero, 100 games in a row. The most impressive part? It learned how to play with zero human interaction. This technique is more powerful than any previous version. Why? It isn't restricted to human knowledge. No data was given, no historical games were given, just the bare-bones rules. AlphaGo Zero surpassed the previous AlphaGo in only 40 days of learning. In only 40 days, it surpassed over 2,500 years of strategy and knowledge. It only played against itself, and it is now regarded as the best Go player in the world, even though it isn't human.
But wait. If this AI learned how to play without any human interaction, made up strategies of its own, and then beat us with those strategies, then that means there's more non-human knowledge about Go than there is human knowledge. And if we continue to develop artificial intelligence, there's going to be more and more non-human intelligence. Eventually there will be a point where we represent the minority of intelligence, maybe even a very minuscule amount. That's fine, we can just turn it off, right? It's a thought, but think: when modern humans began to take over the planet, why didn't the chimps and the Neanderthals turn us off? If this artificial intelligence becomes superintelligent, learns through the internet, and is connected to it, well, we can't just shut down the entire internet. There's no off switch. So what happens if we end up stuck with AI that is constantly and exponentially getting smarter than we are? What if it decides that us humans get in the way, and the AI hits the off switch on humanity?
The kind of AI that AlphaGo is, is called narrow AI. Narrow AI is good at one task: speech and image recognition, playing games like chess or Go, or even fairly complex games like Dota 2. At The International 2017 world championship, OpenAI's bot destroyed pro player Dendi two to zero. But much like AlphaGo Zero, it wasn't taught how to play the game. It played out millions of one-versus-one matches against itself and learned on its own. It started out barely knowing how to walk, and eventually, as time went on, it surpassed human-level skill. If you use Spotify, you'll see that it creates daily mixes for you based on the music you listen to. Amazon learns your buying habits and suggests new products to you. It seems like a common recurrence: these AIs teach themselves how to do the task at hand.
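To make that self-play idea a little more concrete, here's a toy sketch in Python: tabular Q-learning on a tiny single-pile game of Nim, where the program gets nothing but the rules and improves purely by playing against itself. To be clear, this is only an illustration of the self-play principle, not the actual algorithm behind AlphaGo Zero or the OpenAI bot, and the pile size, learning rate, and episode count are arbitrary choices for the example.

```python
import random

# A toy example of learning purely through self-play: tabular Q-learning on a
# tiny single-pile game of Nim (take 1-3 stones, whoever takes the last stone
# wins). This only illustrates the self-play principle; it is not the
# algorithm used by AlphaGo Zero or OpenAI's Dota bot.

PILE = 10                      # starting number of stones
ACTIONS = (1, 2, 3)            # a player may remove 1, 2, or 3 stones
ALPHA, EPSILON = 0.3, 0.2      # learning rate and exploration rate

# Q[(stones, take)] = estimated value of that move for the player about to move.
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in ACTIONS if a <= s}

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def pick(stones):
    # Epsilon-greedy: usually play the best known move, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])

for _ in range(50_000):        # play many games against itself
    stones = PILE
    while stones > 0:
        action = pick(stones)
        remaining = stones - action
        if remaining == 0:
            target = 1.0       # taking the last stone wins the game
        else:
            # The opponent moves next; whatever is best for them is worst
            # for the current player (a zero-sum, "negamax"-style update).
            target = -max(Q[(remaining, a)] for a in legal(remaining))
        Q[(stones, action)] += ALPHA * (target - Q[(stones, action)])
        stones = remaining

# The learned policy should leave the opponent a multiple of 4 stones whenever
# it can, which is the known optimal strategy for this little game.
for s in range(1, PILE + 1):
    best = max(legal(s), key=lambda a: Q[(s, a)])
    print(f"{s} stones left -> take {best}")
```

After enough games, the little value table alone plays this game perfectly, without ever being shown a single human example.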
How is that even possible? Well, through something called machine learning. Machine learning is the science of trying to get computers to learn and think like we humans do. Machine learning is essentially the same way that babies learn. We start off as small, screaming sacks of meat, but over time we improve our learning: we take in more data and more information from observations and interactions, and most of the time we end up pretty smart. The most popular technique out there to make a computer mimic a human brain is known as a neural network. Our brains are pretty good at solving problems, but each neuron in your brain is only responsible for solving a very minuscule part of any problem. Think of it like an assembly line where each neuron has a certain job to do in order to completely solve a problem. Let's make a simple example of a neural network.
In order to say that someone is alive, they have to either have a pulse or be breathing, but not necessarily both at the same time. Picture a single neuron in a neural network. It functions just like a neuron in your brain does: it takes in information and then gives an output. If this neuron takes in information that says, "Hey, this person has a pulse and is breathing," then the neuron deciphers this information and says, "Okay, this person is alive." It learns to analyze situations where the human would be declared alive, when it's breathing or has a pulse, and situations where the human would be dead, when neither of those is true. That's essentially a bare-bones explanation of how it works. Of course, no real neural network is this simple; many have millions of parameters and are much more complex than this one-layer network.
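As a rough illustration of what that single neuron is doing, here's a minimal sketch of one artificial neuron (a perceptron) learning the "alive if pulse or breathing" rule from the four possible input combinations. The step activation, learning rate, and training loop are illustrative choices for the example, not details from the video.

```python
# One artificial neuron learning "alive = has a pulse OR is breathing".
# The weights, learning rate, and number of passes are illustrative choices.

def step(x):
    # Fire (output 1) only if the weighted evidence crosses zero.
    return 1 if x > 0 else 0

# Training data: (has_pulse, is_breathing) -> alive?
examples = [
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):  # a few passes over four examples is plenty here
    for (pulse, breathing), alive in examples:
        prediction = step(weights[0] * pulse + weights[1] * breathing + bias)
        error = alive - prediction
        # Nudge each weight in the direction that shrinks the error.
        weights[0] += learning_rate * error * pulse
        weights[1] += learning_rate * error * breathing
        bias += learning_rate * error

for (pulse, breathing), _ in examples:
    out = step(weights[0] * pulse + weights[1] * breathing + bias)
    print(f"pulse={pulse}, breathing={breathing} -> alive={out}")
# Expected after training: alive=0 only when both inputs are 0.
```

A real network just stacks huge numbers of these little units into layers, which is where the millions of parameters mentioned above come from.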
The world is full of sounds and visuals and just data in general, and we take in all of this to form our view of reality. However, as more and more complex topics show up with more and more data, it becomes harder and harder for humans to do this analysis on their own. This is where machine learning comes in handy. Machines can not only analyze the data given to them, but also learn from it and adapt their own view of it. Let's go back to AlphaGo Zero: in only 40 days it surpassed thousands of years of strategy and knowledge and even made up some strategies of its own. But how did it do all of this so quickly? Biological neurons in your brain fire at about 200 hertz. That has proved to be fine for us, but modern transistors operate at over two gigahertz, roughly ten million times faster.
Signals in your brain travel through what are known as axons at about a hundred meters per second, which is pretty fast and gives us pretty good reaction times, but it's only about a third the speed of sound. Computers, however, transmit information at close to the speed of light, or 300 million meters per second. So there's quite a big difference between our brain's capabilities and a computer's. In just one week, a computer can do 20,000 years' worth of human-level research, or simulations, or anything it is trained to do. A brain has to fit inside your head; there's a limit to how much space it can take up, but a computer could fill an entire room or even an entire building. Now, obviously, weak AI doesn't require an entire server room to run. Like we saw with OpenAI, it only took a USB stick, but more intelligent AI may require much more power.
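For what it's worth, here is the quick back-of-the-envelope arithmetic behind those comparisons, using the figures above plus an assumed 343 m/s for the speed of sound in air.

```python
# Back-of-the-envelope arithmetic for the speed comparisons above.
neuron_rate_hz = 200          # rough firing rate of a biological neuron
transistor_rate_hz = 2e9      # a 2 GHz transistor clock

axon_speed = 100              # metres per second along an axon
speed_of_sound = 343          # metres per second in air (assumed value)
speed_of_light = 3e8          # metres per second, roughly, for electronic signals

print(transistor_rate_hz / neuron_rate_hz)  # 10,000,000 -> ten million times faster
print(axon_speed / speed_of_sound)          # ~0.29 -> about a third the speed of sound
print(speed_of_light / axon_speed)          # 3,000,000 -> millions of times faster signalling
```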
Artificial general intelligence, or AGI, is AI with more than a single purpose. AGI is at or near human-level intelligence, and this is where we're trying to get to, but there's a problem: the more we dig into it, the harder it seems to be to achieve. Think about how you perceive things. When someone asks you a complicated question, you have to sort through a ton of unrelated thoughts and observations and articulate a concise response to that question. This isn't exactly the easiest thing for a computer to achieve. See, humans aren't able to process information at the speed of light like computers can, but we can plan things. We can think of smart ways to solve problems without having to brute-force through every option. Getting a computer to human-level thinking is hard. We humans can create things. We invent things.
We create societies and play games and laugh. These are all hard things to teach a computer. How can you teach a computer to create something that doesn't exist or hasn't even been thought of? And what would be its incentive to do so? I believe AGI, or strong AI, is the most important artificial intelligence to be created, and here's why. Machine learning is exponential, meaning that it starts off rather slow, but there's a certain tipping point where things start to speed up drastically. The difference between weak AI and strong AI is millions of times larger than the difference between strong AI and superintelligent AI. Once we have an artificial general intelligence that can function like a human being, for the most part at least, it may help us reach superintelligence in only a few months, or perhaps even weeks. But here comes another big problem. See, many people tend to see intelligence on a graph like this.
We have maybe an ant here, a mouse at about here, the average human here, and maybe Einstein right here, just above. If you asked most people where superintelligent AI would lie on this graph, most would probably put it somewhere around here, but this just isn't the case. Although AI might not be at human-level intelligence yet, it will be one day, and it won't stop at human level. It'll most likely just zoom past and continue getting more and more advanced until eventually the graph looks something like this. This is what is known as the technological singularity, where artificial intelligence becomes so advanced that there's an extreme explosion of new knowledge and information, some of which might not even be understandable by humans. If we make a superintelligent AI, that AI would be able to improve upon itself and in turn get smarter in a shorter amount of time, which means that this new and improved AI could do the same thing.
It continues to repeat this process, doing it faster and faster each time. The first recreation may take a month, the second a week, the third a day, and this keeps going until it's billions of times smarter than all of humanity. Compared to chimps, we share about 96% of their DNA. We are 96% chimp, but in that little 4% we went from being extremely hairy, mediocre primates to a species that has left the planet, a species that plans to colonize Mars, a species that made missiles and millions of inventions, all compacted down into that 4% DNA difference. The number of genetic differences between a human and a chimp is ten times smaller than the genetic difference between a mouse and a rat, and yet, being only 4% different from us, their entire fate seems to lie in our hands. If we want to, for some reason, put a McDonald's on a chimpanzee habitat, we just do it.
We don't ask. There are 5.5 quadrillion ants on the Earth. Ants outnumber us a million to one, and yet we live like they don't exist at all. If we see one crawling on a table, we just squish it and move on with our day. What happens when the day comes where AI becomes the human in this situation and we become the ants? Superintelligent AI is different from the software we know today. We tend to think of software as something we program: the computer will follow our rules. But with highly advanced AI, through machine learning, the AI teaches itself how things work and how to improve on them. After a while there is no need for any human interaction. Perhaps humans would even slow it down as opposed to speeding it up. We have to be careful about how we make it, as Sam Harris stated:
"The first thing you want to do is not give it access to the internet, right? You've just got to cage this thing, because you don't want it to get out. But you want to tempt it. You want to see if it's trying to get out, and how do you know whether it is? This is called a honeypot strategy, where you tempt it to make certain moves in the direction of acquiring more power."
For example, if we're somehow able to give a superintelligent AI orders and it follows those orders, it may just take the quickest and easiest route to carry them out. Just because we make a superintelligent AI doesn't mean that it's going to be wise. See, there's a difference between intelligence and wisdom. Intelligence is more about making mistakes, acquiring knowledge, and being able to solve problems through that. Wisdom, on the other hand, is about applying the correct knowledge in the most efficient way; wisdom is being able to see beyond the intelligence gained and apply it to other things in, hopefully, a productive way. If we give an AI an order to solve world hunger, well, the easiest way to solve world hunger is just to kill all life on the planet, and then nothing would ever be hungry again, but obviously that isn't what we want.
We would have to somehow teach the AI to have human-like values, a moral code to follow, and somehow work within that. In a way, we'd want it to be like a superintelligent human cyborg. You may not even realize it, but the majority of humans are already cyborgs. Humans are already becoming less biological and more technical. We already put our minds onto non-biological things. How? Well, many of you are watching this video on one of those devices right now: your phone. Your phone has become an extension of yourself. It can answer any question you could ever ask at a moment's notice. There's only one problem though: your inputs are just too slow. If we were able to create a high-bandwidth link between the brain and the internet, we would literally be connected to everyone and everything on the entire planet. This is what Neuralink is aiming to do:
create what is known as a neural lace. Your brain has two big systems, your limbic system and your cortex, and these two are in a relationship with each other. Your limbic system is responsible for your basic emotions and your survival instincts. Your cortex is responsible for your problem-solving skills and your critical thinking. Neuralink is aiming to create a third layer on top of this. The AI would be like a third wheel in this relationship, but it would increase our capabilities by multiple orders of magnitude. We would all have eidetic, picture-perfect memories. We would have access to all the information available in the world and be able to access it instantly. With the help of this third layer, people may eventually realize that they like this newfound knowledge and way of living more than the alternative, more than living without this newfound AI that has been installed into them.
Eventually people may decide to ditch their human body, their biological self, in favor of the artificial world. This, of course, is a long way away, but that doesn't mean we shouldn't think about it. There are, of course, many problems and counterarguments that come up, and I'll address some of them now. For starters, how do we program something like consciousness? We don't even know what it is yet. How do we program something like a limbic system into a machine, something that has a sense of fear,
perhaps even a fear of death? Can a machine or an AI really love someone or show that emotion? Because if it can love, it can also hate, which could raise big problems for us in the future. This is a fear of many people when the idea of superintelligent AI is brought up. The biggest and most pressing question is: will it be pleasant, or will it be hateful?
Could it even be either of those things in the first place? And more importantly, is it even necessary? Sure, a superintelligent AI would have all the information in the world at its disposal. But then again, so do you and I, and so do all of the people in the world with very radical views. Who's to say that a superintelligent AI wouldn't adopt those views instead and then decide to act on them rather than on what we had planned? Sure, it's possible that a superintelligent AI could improve upon itself using the knowledge it has gained, but then again, it isn't pulling random improvements out of thin air. It's not just going to unite quantum mechanics and general relativity with the wrong mathematics. What if some of its upgrades need new experiments to be run? What if it needs more information? Is it just going to make it up?
This leads into: what if the AI learns to lie, and then decides to lie to us about its accomplishments for its own selfish reasons? It's also possible that the progress of computing will slow down, and it looks like it already is. I mean, look at airplanes: in the past 100 years we went from this to this, but like many things, there's a threshold that is hard to get past. But if we do get past this barrier, the upsides could be tremendous. It's almost obvious that superintelligent AI has the capability of making humanity billions of times better than it is today, but on the flip side, it could also be used as a weapon. Just think: like I said before, if this thing is running for a week and has access to all of the world's information, it could make over 20,000 years of technological progress.
Let this thing run for six months and you could potentially have 500,000 years or even more of technological progress. Imagine if this got into the wrong hands, and imagine the repercussions it would have. When you Google something, not only are you giving out data and telling a potential artificial intelligence what you're thinking, you're also telling it how you're thinking. We feed it all of our questions and all of our answers. If we were to try and program a limbic system into an AI, we would be teaching an artificial intelligence what we're afraid of. Out of all the data in the world, over 90% of it has been created in the past three years. AI has learned how to recognize faces. It has learned how to recognize voices. It learns languages and will eventually translate between them all seamlessly. This knowledge and information trend could continue until it has gone through all 100% of this data.
This is why artificial intelligence is such an important topic. Superintelligent AI is the last invention that humanity will ever make; once it's invented, it can't be uninvented. The technology and developments that could possibly turn the human species into immortal, godlike figures is, coincidentally, the same technology that could also cause the downfall of humanity. Many people are resistant to AI because they fear it may have some of the negative traits that we humans do: it may be greedy, it may lie, it may be selfish, it may be rebellious, it may be short-tempered. The power of superintelligent AI is there, patiently waiting to be found, but the question is: do we actually want to find it?
If we want the future to be perfect, if we want to create the perfect superintelligent AI, we need to be prepared. We need the smartest minds and the best strategies in order to create the perfect superintelligent AI. brilliant.org is helping turn people like you and me into the pioneers of the future of humanity. Topics such as machine learning and artificial neural networks aren't exactly the easiest things to grasp, but Brilliant does a great job of explaining each bit, piece by piece. You can go in barely knowing anything about a topic and come out with a solid grasp on it before the day is over.
If you're like me and are interested in the future of artificial intelligence, go check out Brilliant's courses on machine learning, artificial neural networks, and statistics. You'll be surprised at how easy they make it to learn.