

“A Citizen’s Guide To Artificial Intelligence”: A Nice Focus On The Societal Impact Of AI


“A Citizen’s Guide to Artificial Intelligence,” by a cast of thousands (John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat, and Merel Noorman), is a nice high-level view of some of the issues surrounding the adoption of artificial intelligence (AI). The author bios describe them all as lawyers and “philosophers” except for Noorman, and with that crowd it’s no surprise the book is much better at discussing the higher-level impacts than AI itself. Luckily, there’s a whole lot more of the former than of the latter. The real issue is that they’re better at explaining things than at coming to logical conclusions. We’ll get to that, but it’s still a useful read.

The gap in the authors’ understanding of AI shows early: they first give a nice explanation of false positives and false negatives, but then write, “It’s hard to measure the performance of unsupervised learning systems because they don’t have a specific task.” As this column has repeatedly mentioned, a key use of unsupervised learning is the task of detecting anomalous behavior, especially when anomalies are sparse. The difference between supervised and unsupervised learning is in knowing what you’re looking for:

·      Supervised learning: “Hey, here’s attack XYZ!”

·      Unsupervised learning: “Hey, here’s this weird thing that might be an attack!”
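The distinction can be sketched in a few lines of Python. This is a minimal illustration using made-up traffic numbers, not anything from the book: the unsupervised detector flags whatever deviates sharply from the rest of the data without being told what an attack looks like, while the supervised check matches a signature it was already taught from labeled examples.

```python
from statistics import mean, stdev

# Hypothetical network request sizes; one is wildly out of line.
traffic = [500, 520, 480, 510, 495, 505, 515, 490, 9800, 500]

# Unsupervised: no labels — flag anything far from the observed distribution.
mu, sigma = mean(traffic), stdev(traffic)
anomalies = [x for x in traffic if abs(x - mu) > 2 * sigma]
print("Weird things that might be attacks (unsupervised):", anomalies)

# Supervised: we already know what attack XYZ looks like, so we match it.
def looks_like_attack_xyz(x):
    # Threshold stands in for a model trained on labeled attack examples.
    return x > 5000

attacks = [x for x in traffic if looks_like_attack_xyz(x)]
print("Known attacks (supervised):", attacks)
```

Both approaches flag the same request here, but only the unsupervised one would also catch a novel attack that matches no known signature.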

So skim chapter one to get to the good stuff. Chapter two is about transparency, and Figure 2.1 is a nice little graphic about the types of transparency they are describing. What I really like is that “accessibility” is in the top tier. It doesn’t matter if the designers and owners of a system are claiming to be responsible and are also inspecting the results to check accuracy; if the information isn’t accessible to all parties involved in and impacted by the AI system, there’s a problem.

The one issue I have with the transparency chapter is in the section on “human explanatory standards.” The authors seem to be claiming that since we’re hard to understand, why should we expect better from AI systems? They state, “A crucial premise of this chapter has been that standards of transparency should be applied consistently, regardless of whether we’re dealing with humans or machines.” Yes, a silly premise. We didn’t create ourselves. We’re building AI systems for the same reasons we’ve built other things: to do things more easily or more accurately than we can do them ourselves. Since we’re building these systems, we should expect to be able to require more transparency to be built into them.

The next three chapters are on bias, responsibility & liability, and control. They are good overviews of those issues. The control chapter is intriguing because it’s not just about us controlling the systems; it also covers issues about giving up control to systems.

Privacy is a critical issue, and chapter six covers it nicely. The most interesting section is on inferred data. We talk about inference engines making inferences on the data, but the extension of that to privacy is to say there might be ethical limits to what engines should be allowed to infer. There’s the old case of a system knowing a young woman was pregnant and sending pregnancy sales pitches to her home before she had told her parents, but there are far worse situations. Consider societies that are intolerant of certain sexual orientations, which can be inferred from other data. A government could use that to persecute people. There’s a wide spectrum between those examples, and the chapter does a nice job of getting people to think about the issue.

The next chapter covers autonomy and makes some very good points. One is that humans have always challenged each other’s autonomy, but that AI, combined with the lack of laws and regulations, makes it far easier for governments and a few companies to remove our autonomy in much more opaque ways than have previously been available.

Algorithms in government and employment get a good introduction in the next chapters, but with a lot of the same information seen elsewhere. The most interesting part of the back portion of the book comes in chapter ten, on oversight and regulation. The authors suggest that, given the complexity of AI, there is logic to creating a new oversight agency for the national government: an FDA for AI, as they put it. In business terms, it’s a center of excellence in AI, able to formulate national policy for businesses and citizens while also helping other agencies adapt the general policies to their specific oversight areas. That makes excellent sense.

No book is perfect, but I’m pleasantly surprised that a book with so many authors attached flows as well as it does. Then I remember they are all academics, used to research papers with multiple authors. Of course, with that many academics, the risk is always that a book will sound like a research paper. Fortunately, they seem to have escaped that problem. “A Citizen’s Guide…” is a good read to help people understand key issues in the major impact AI will have on society. More people need to realize that quickly and get governments to focus on protecting people.
