Artificial Intelligence

Posted by kdow on Mar 29, 2017 8:58:46 AM


If I were to call out one subject that I’m obsessing over in 2017, it would be A.I. My peers are focused on the existential crisis of how to get anything done in an era where our “smart” phones control much of our very being. But I think our phones, “smart” homes, light switches and everything currently touted as “life automation” are basic stuff. The only thing that stopped them from existing in 1999 was economics. They are not intelligent.

I’ve been reading furiously about A.I., absorbing information ranging from passive, pseudo-intelligent actors like chatbots to the hyper-intelligent decision-making coming from within the halls of DeepMind (now part of Google). I’ve also tried to understand the perspectives of both big camps within A.I.’s sphere of influence: Elon Musk & co., who predict the very downfall of our species as part of the rise of the machines, and the Larry Pages of the world, who embrace our hyper-intelligent future machines and are trying to steer them. Steve Wozniak even joked that he started feeding his dog filet because there’s a version of the future where he is a robot’s pet. Treat your pets like you want to be treated when you are one.

Sam Altman, the head of Y Combinator, once described the accelerating progress of any innovation as standing on an exponential growth curve. Behind you is a flat line, because everything already done is done. In front of you, though, is what feels like a 90-degree angle: infinite difficulty. But you’ve no idea where on the curve you are, because your view of what’s behind & in front of you never really changes. I reckon that’s about as good an analogy as we have for what’s going on in A.I. these days. We don’t really know where we are in the pursuit of true intellect from software.

Defenders of A.I. will argue that Musk & co. are being hysterical. No one designing some form of A.I. will, first off, design it to act like a human. Humans are flawed in their design & logic processing. Moreover, no one would design A.I. with a fundamental desire to survive at all costs. The other side of the coin is that an A.I. with a given function will try to survive anyway: a dead or switched-off A.I. can’t complete its given task, so it will attempt to not be dead. Fundamentally, that logic kicks in at some point.

I’m not with Musk on this one. I don’t think we should be obsessing over the issues that may arise from A.I. Humans didn’t design the seatbelt before the car. Mistakes will be made, for sure. Someone’s going to build self-iterating, self-improving intellect in software that then gets loaded into a rocket carrying something harmful or explosive (or both). As humans, we have to accept that almost anything we invent as a species will eventually be used for bad things.

One really bizarre example of this is the scenario where sexual abuse isn’t eradicated from society, but instead is dealt with by building intelligent robots that absorb said abuse from would-be criminals. Is this ethical? Could the robots be making the situation worse? But at the very least, we’re keeping the criminals away from human victims, right?

A more commonplace example of A.I. programming getting weird is that a self-driving car will, at some stage, need to decide who dies. A car careening towards a truck, with children standing on the only stretch of roadside it could veer onto to avoid the collision, has to decide where to go. Does it kill the driver or the children? This is a debate that I have no doubt is raging right now in the Tesla offices.

And that’s the most bizarre point in all of this. Musk is very much against A.I. being part of our future, for fear of it taking over and making us the pets. But Musk has arguably shipped A.I. to more people in a meaningful way than any other entrepreneur alive today.

I’m excited by the utility of A.I. in our future. But even more so, I’m excited for the problems we can solve as we pursue a future where software can be self-aware and potentially self-improving. Maybe I’m too optimistic.

For now, though, I’ll watch from afar with wide-eyed interest in what’s going on in the A.I. space. I’d love to get involved at some point. It’s fascinating even to think about what the whiteboard ideas are behind closed doors.