I read this, like many people did I suspect, because I like Janelle Shane's AI Weirdness blog. This book does rehash some of the material from the blog, as you'd expect, but the focus is more on explaining AI in a non-technical, non-sensational, and friendly manner. Probably the people who would get the most out of it are those whose knowledge of AI begins and ends with how it's portrayed in the news and in fiction.
Review of 'You Look Like a Thing and I Love You' on 'Goodreads'
3 stars
This book is both funny and frustrating.
Toward the beginning I almost gave up on understanding the process Shane describes regarding machine learning. She does a good job of simplifying it for the uninformed and yet, I still struggled to understand the idiosyncrasies of the process computers go through to create processing "neurons" for decision making.
However, once the book moves into machines using adversarial algorithms to learn, it became easier to understand. I love the examples of how machines essentially look for the easiest solutions, even when that means hacking their own simulation. The hoverdog made me laugh out loud. The tipping towers might also be a nice way to get from here to there.
It often felt like teaching machines is like teaching children: "Oh, but you didn't say I couldn't solve it that way." And, "What do you mean this isn't the problem you wanted solved? You need to ask better questions." Since machines have no schema, no context, they often do unexpected things, just like kids. As an educator, this really emphasized for me the importance of building schema when teaching kids.
I'm a bit relieved to learn that AI is likely not as advanced as corporations and nations would like us to think, and that AGI is very far away.