The Rise of Artificial Intelligence: A Reflection

As I noted in yesterday’s post, I attended a conference at MIT called The Rise of Artificial Intelligence. I’m glad I got to attend because there were a lot of great speakers and interesting perspectives to hear. While I won’t be able to cover everything from the conference in this post, I thought I’d at least share some of the key themes that were discussed and the important points I took away from the event.

The first theme from the conference, and arguably the most important one, is the exponential growth of data and its impact on artificial intelligence. In almost every discussion about AI, someone will point out that people have been saying AI is only 10 years away for 50 years. That’s true, but it’s also true that this time is different, and the reason is the explosion of data. Billions of messages, videos, images, and other pieces of content are shared every day, and that number keeps growing. Because of this explosion of data, developers can better train computers to learn and recognize what we are saying, what’s in a picture or video, and what is or isn’t important.

The second theme is transparency. Many people are rightfully concerned about what will happen with AIs in the future as their abilities increase. One way to ease that concern is to be transparent about what the AIs are doing. If we show everyone how an algorithm works and the path a computer takes to reach a particular decision, we can have better oversight and make decisions as a population rather than leaving them up to individuals or companies whose interests may not be aligned with ours.

Another theme from the conference is using AIs to enhance human ability. I think this often gets lost in the discussion of artificial intelligence. Many assume that we are doomed because computers keep getting smarter and better while human evolution progresses much more slowly. What they are missing is that computers were made to help us, not replace us. I recently wrote a piece on this point, and I am even more confident in that view after attending the conference. AIs are being created to amplify our abilities, not to replace them. One way to see this is by taking a less selfish look at the world. Consider someone with a disability, physical or mental: with the use of artificial intelligence and advances in computing, that disability could be greatly reduced or even removed entirely. That’s the type of enhancement we should be encouraging, not fearing.

Maybe the most interesting theme of the conference was gender. At a conference where the majority of the speakers and audience were male, it could easily have been lost. During one of the panels, however, someone from the audience asked an interesting question about gender and AIs: why do the majority of these “serving” bots and assistants have female names and voices? Some cited studies on how people respond to a female voice versus a male voice, but for the most part I think no one had a good answer. It’s a really interesting question to think about and one that could be important to address going forward.

The last theme I want to share is the one that seems to be at the top of everyone’s mind, both at the conference and in general: what does the future look like if we don’t have jobs? Almost every speaker at the conference acknowledged that at some point in the future, AIs will become advanced enough to do everything for us. This obviously makes a lot of people nervous and raises the question of what we are to do if that’s true. Many people offered different proposals, from a universal basic income to a new focus on the arts. But it also raises the question of whether we should control the advancement of AI. I thought Ray Kurzweil gave an interesting response when he said there may be risk in advancing AIs, but it is our moral imperative to continue developing them because there are so many horrible problems out there (cancer, poverty, disease, wars, etc.) that we have not yet been able to solve. You may disagree with his view, but it is an interesting one to think about.

One question I thought about with respect to a future with intelligent AIs is whether we are limited by the concepts and systems we already have in place (i.e., capitalism, money, democracy, etc.). Many discussed taking the things we already have and molding them to a future where AIs are more powerful, but few discussed coming up with new systems to replace the ones currently in place.

At the end of the day, it seems inevitable to me that AI will be very important over the next decade. An increasing number of businesses are incorporating the technology, meaning the ones that don’t risk being left behind and eventually going out of business. Further evidence that AI is the future: a large number of computer science students today are studying AI. Much like the explosion of mobile app development that followed the introduction of the iPhone in 2007, there is now an explosion of students studying AI.