Episode #03 Section 3
Caution vs. Innovation
What influence does technology have on the fabric of society? Is it a curse or an opportunity for progress? Today William and Steven discuss the effects that artificial intelligence has had and might yet have, the dangers involved, and how to ensure that life with machines remains in harmony with our common values.
Steven: Regulating machines and regulating humans, that's the same: Who says that's okay, who says that's not okay? And within certain parameters we have governments for, obviously, the social side of things. But for machines, and for the AI that we were talking about last episode and in this episode: Who is regulating this? What safe parameters are actually out there that stop us from going too far, if there is such a thing?
William: Yeah, and often you don't know what is too far until it's behind you, in the past, already happened. So the scientific method would be: let's test it first in a controlled environment, until we know how this system works and we can predict better. But then people are not that patient. They say, "No, that's too limiting. We can't wait that long. We want to live our lives now." It's difficult to know what's right and wrong, what the consequences of laws and regulations will be if they are enforced. So there was this debate on Twitter several months ago between Mark Zuckerberg and Elon Musk.
Steven: Yeah, you mentioned that towards the end of last episode.
William: Okay. Because Elon Musk is someone who is very aware, I would say. At least he's put a lot of thought into the possible outcomes of the progression of technology, and in particular the way AI is going now. And I don't know how the beef started, but I can see how Mark Zuckerberg would be the opposite, how he is always open to innovation. He will just go and do it, and ask questions later. I mean, he's been in front of Congress at least twice now. I saw a summary of the last one just a few days ago, answering the concerns that senators and congresspeople have. This time it was about his new digital currency, which he called a cryptocurrency. But it's not decentralized, so I wouldn't call it that. Libra, have you heard of it?
William: Well, Facebook wants to come out with their own digital currency, and, of course, the government is afraid of that, because of all the advantages they have from being in charge of the money in the world. And that would be very much lost, I don't know the word, compromised, probably very much compromised.
Steven: Good word, yeah.
William: Okay. Elon Musk is someone who believes that we should not just go go go and advance AI as much as possible, as fast as possible. Because, yeah, even though obviously there are advantages like comfort and productivity, making money...
Steven: Yeah, if a machine could do everything we can, well, we don't have to do anything. So we let them do it, and then we get a nice and easy life.
William: Right. He's not afraid of that. That is a concern of some people as well. He's just afraid of the destructive power. It's a bit like the Cold War, you know, that the Soviets and Americans just kept building up their arsenal so that they could have blown up the world five times over or so, and in the meantime it's probably a lot more than that, so that's not necessary.
Steven: Whoever gets there first will have the power and therefore have the control.
William: Yeah. Julian Assange has said that the more powerful AI gets, the more it matters whose hands it is in. And he also emphasized cryptography. So the person who is best at protecting their data by encrypting it will have more power than those people who are not as good at encrypting their data. Because there will always be this back and forth: I have protected my data with this program. Someone else manages to hack it. So I need to update my protection. And so on. And AI will obviously be used to break encryption. So that's an aspect.
Steven: That is fascinating. It's something I haven't thought that much about, actually. Because, I mean, I've watched Mr. Robot. And again, that concept of, you know, hacking and counter-hacking, and protecting yourself, cyber-security. I don't think we appreciate, at least I know I don't, what is going on behind the scenes of every country as they progress their computing technology, how to protect all their data. And then, through, I guess, the desire to keep ahead of the game, trying to hack other countries and find out what they're doing. That's quite terrifying, actually. If you don't just ignore it.
William: Yeah, we just live in our blissful ignorance. And we don't even know how many people are working to keep us protected. I mean, some of them are really interested in doing that. You probably know my general opinion of the military and intelligence community. But there are good people among them. Sometimes they're called white hats or ethical hackers, stuff like that, you know, who want to protect us and not exploit our data or money or talent or workforce. Militaries usually have these digital / AI departments now, because the digital world is just as militarized, just as important and dangerous, as the physical one.
Steven: So in terms of AI, the terminology, so we brought this up in the last episode and I've forgotten. Because in my head AI is always the peak of consciousness. But these days, in the last five years, they're using AI in lots of different contexts. What is the definition of it, I guess? Is it just anything that can create its own little systems?
William: Wow, no. You're going way beyond the way that I've been using the word. Even when you said "peak of consciousness" you're talking about like really self-sustaining artificial life-forms, right?
Steven: Because in general sci-fi that's always how it's used: an AI is a computer or machine that can...
William: Okay, it's good that you ask the question, then, since there can be very different interpretations. Sci-fi is usually futuristic, right? So it's talking about the potential of the thing we have now: independent life forms, complicated relations between machine and man, stuff like that, sure. But right now we use AI pretty much synonymously with machine learning. And that term is a bit more established in science and engineering. So a system is intelligent whenever it can make decisions that are not obvious and that are not totally explicitly told to it by a human.
Steven: So what, you code it and then you give it access to a certain amount of data, and it can just learn what it wants to from the data?
William: Exactly, that's the state of AI currently. And that doesn't sound very advanced, right? But the normal approach used to be that you give it rules and it will make decisions based on those rules. Decision trees are an example. Something like, when it's helping you to decide what to wear before you leave the house to be prepared for the weather. So if it's a certain temperature then do this, or if it's a certain humidity; or what has been the record of the weather in the last few hours and days, and can we predict the weather for today and tomorrow and so on, based on that? So if you just give it explicit rules like "if the temperature is above twenty degrees, do this" or "if humidity is high or the sky is very cloudy, take an umbrella", stuff like that, that is rule-based decision making. Now the new thing is data-based inference. Inference is just predicting: based on what you have now, what should you do? Prediction in AI doesn't always mean "what will the future be like?". It just means: given this data, what is the best outcome or the best advice? So yeah, with data about temperature, humidity and so on, we no longer claim that we know what is the best action for which condition. We just say: in the past, under these conditions, this was done, and this would have been ideal. So now, given the current set of conditions, what should we do so that we will have the best outcome? And the algorithm will go through the data, learn patterns, pick up on regularities, you know, implicit rules that were just part of the nature of the topic, of the problem.
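The contrast William draws here can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the episode: the function names, thresholds, and the tiny "history" dataset are all made up. The first function encodes hand-written rules (the decision-tree style); the second learns a humidity cutoff from past examples, which is the simplest possible form of data-based inference.

```python
# Rule-based: a human writes the decision rules explicitly.
def rule_based_umbrella(temp_c: float, humidity: float) -> bool:
    # Hand-picked thresholds, written by a person (illustrative values).
    if humidity > 80:
        return True
    if temp_c < 20 and humidity > 60:
        return True
    return False

# Data-based: learn a single humidity threshold from past observations.
def learn_threshold(history: list[tuple[float, bool]]) -> float:
    """Scan candidate cutoffs and keep the one that best matches
    past (humidity, took_umbrella) decisions."""
    best_t, best_correct = 0.0, -1
    for t in range(0, 101):
        correct = sum((h > t) == took for h, took in history)
        if correct > best_correct:
            best_t, best_correct = float(t), correct
    return best_t

# Made-up past data: (humidity %, did we take an umbrella?)
history = [(30, False), (45, False), (65, False),
           (75, True), (85, True), (90, True)]
threshold = learn_threshold(history)

# Inference: apply the learned rule to today's conditions.
today_humidity = 82
print("take umbrella:", today_humidity > threshold)
```

The point of the sketch is that nobody told the second function where the cutoff is; it recovered that implicit rule from the data, which is exactly the "data-based inference" idea, just at toy scale.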
Steven: The great thing about that is the chaos element. Because you have people's personal feelings, about a certain color, or that's their favorite top, so they're gonna wear that even if it is raining: I'm still gonna put that on, because even if I get wet, I still want to look good, blah blah blah. Which machines must get very confused by.
William: Yes. The more human elements come into the problem, the more unpredictable it becomes. That is predictable. But anyway...
Steven: That was a good breakdown. That was a really good breakdown for me, personally just the difference.
William: You've probably heard the term "neural net". That is the architecture of the algorithm that is most used these days. It's very powerful. It's not new at all. It's been around for at least thirty years, probably more. But it's back in style now and used almost ubiquitously in computer science, because the hardware has improved, just because computers have gotten faster and stronger. Not because the idea, the mathematics behind it, has improved that much. We are improving that too, now that you can test things in a lot less time than you used to. But the core idea is still the same.
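That core idea is small enough to write out. The following is a minimal sketch of what a neural net does at its heart, layered weighted sums passed through a nonlinearity; the weights here are made-up constants rather than learned values, and real networks have many more neurons and layers.

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial neuron: a weighted sum of inputs, squashed
    through a sigmoid nonlinearity into the range (0, 1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def tiny_net(x: list[float]) -> float:
    """A two-layer "network": two hidden neurons feeding one output neuron.
    The weights are arbitrary illustrative numbers, not trained values."""
    h1 = neuron(x, [0.5, -0.4], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

out = tiny_net([1.0, 2.0])
print(0.0 < out < 1.0)  # sigmoid output always lies strictly between 0 and 1
```

Training would then mean adjusting those weights to fit data; the mathematics of that has been around for decades, which is William's point: the recent leap came mostly from hardware letting us run this at enormous scale.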
Steven: What's interesting is that the idea advances the technology, and the technology advances the idea.
William: That's true. That's kind of like the first questions we asked, right? How do we change when technology changes; and I think we should. But then, if we change in a bad way, should we really progress in technology?