
Will Designers Dream of Electric Sheep?


I’m baking bread. You should too. It’s fun. Creative. Somewhat messy. I personally don’t follow a recipe; it’s a pretty straightforward process. Below is a standard recipe:(1)

  1. Dissolve yeast in warm water in a large bowl. Add the sugar, 1 tablespoon of salt, oil and 3 cups flour. Beat until smooth. Stir in enough remaining flour, 1/2 cup at a time, to form a soft dough.
  2. Turn onto a floured surface; knead until smooth and elastic, 8-10 minutes. Place in a greased bowl, turning once to grease the top. Cover and let rise in a warm place until doubled, about 1-1/2 hours.
  3. Punch dough down. Turn onto a lightly floured surface; divide dough in half. Shape each into a loaf. Place in two greased loaf pans. Cover and let rise until doubled, 30-45 minutes.
  4. Bake at 375°F (190°C) for 30-35 minutes or until golden brown and the bread sounds hollow when tapped. Remove from pans to wire racks to cool. Yield: 2 loaves (16 slices each).
     


A Grain Of Salt

Now, if you read the recipe carefully you might have noticed that there are a lot of variables in there. Some measures are not precisely defined. Some others are purposely vague: is it 30 or 35 minutes in the oven? That’s a big gap. Warm water? How warm? And most crucially: what’s a tablespoon, measurement-wise? Do I ever fill it with the same quantity? Really? Every time I make bread? That’s impossible.

As per Wikipedia, the tablespoon as a unit of measurement varies by region: a United States tablespoon is approximately 14.8 ml (0.50 US fl oz), a United Kingdom and Canadian tablespoon is exactly 15 ml (0.51 US fl oz), and an Australian tablespoon is 20 ml (0.68 US fl oz). The capacity of the utensil (as opposed to the measurement) is not defined by law or custom and bears no particular relation to the measurement.(2)

 

Arbitrary Data

Arbitrary Data is data of unspecified value. There seems to be quite a bit of arbitrary data in my bread recipe. What if I want an Artificial Intelligence (AI) program to take charge of the baking for me? After all, there is plenty of mass-produced bread sold every day. Can I write an AI program to mimic the way I make bread? Yes, I can. How would I go about building such a contraption?

 

Algorithm

It might sound sexy, but there is really nothing sexy about it. An algorithm is just a set of instructions. It’s a process outline. A set of rules to be followed. To start my little AI baking project I need to write an algorithm. Let’s call it Mario, the bread-o-matic AI-powered baking machine.
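To make that concrete, here is a minimal sketch, in Python, of what a “set of instructions” looks like to a machine. The steps are lifted straight from the recipe above; the names and structure are purely illustrative, not actual bread-o-matic code.

```python
# A hypothetical outline of Mario's algorithm: the recipe above, written as an
# explicit, ordered list of steps that a machine can walk through.

RECIPE = [
    "dissolve yeast in warm water",
    "add sugar, 1 tbsp salt, oil and 3 cups flour; beat until smooth",
    "stir in remaining flour, 1/2 cup at a time, to form a soft dough",
    "knead 8-10 minutes, until smooth and elastic",
    "cover and let rise until doubled, about 1-1/2 hours",
    "punch down, divide in half, shape into two loaves",
    "cover and let rise until doubled, 30-45 minutes",
    "bake at 375 F (190 C) for 30-35 minutes, until golden brown",
]

def mario(recipe):
    """Follow the instructions in order; that really is all an algorithm is."""
    for number, step in enumerate(recipe, start=1):
        print(f"Step {number}: {step}")

mario(RECIPE)
```

Nothing magic so far: Mario can recite the recipe, but it has no idea what “a tablespoon of salt” actually weighs.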

 

Mario Goes To School

I would like Mario, at first, to just focus on that tablespoon measure issue. I would like Mario to truly understand, and classify, all the variables of the vague tablespoon measurement. So that the AI system can be consistent with the way salt is added, every time. In order to do so, I need to give Mario some data. Real data about how I, over time, have been adding salt with a tablespoon to the bread mix.

 

The Machine Is Learning

Once my little bread-o-matic AI system digests all that data, it will continue to process more data as it comes in, every time I make bread. This process is called Machine Learning and it is at the very core of any Artificial Intelligence program. ML, short for Machine Learning, is a field of computer science that gives computers the ability to learn without being explicitly programmed. Sounds scary, doesn’t it? Well, in a way it is. More on this later. Let’s bake this loaf of bread.
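To give that learning loop some shape, here is a toy sketch, assuming Mario simply keeps a running estimate of how many grams my “tablespoon of salt” really delivers. The gram values are invented for illustration; they are not real baking data.

```python
# A toy illustration: Mario "learns" my tablespoon by averaging the grams of
# salt I measured each time I baked (hypothetical numbers).

from statistics import mean, stdev

salt_log_g = [17.2, 18.9, 16.4, 19.3, 17.8, 18.1]   # one entry per bake

print(f"learned tablespoon: {mean(salt_log_g):.1f} g "
      f"(+/- {stdev(salt_log_g):.1f} g)")

# Every new bake refines the estimate; no explicit rule was ever written for
# "how salty is my bread", the number comes out of the data.
salt_log_g.append(18.4)
print(f"updated tablespoon: {mean(salt_log_g):.1f} g")
```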

 

*
 

Mario Is Still Pretty Dumb

It’ll take Mario a relatively short time to figure out all the variables of the tablespoon measurement. Maybe an hour to compute all the other arbitrary variables in that recipe, and then some. But. Wait! What if Michael is also baking bread? And I know Sophie bakes bread almost every day! Well then, I am going to ask them to share their baking data, and I’ll share mine with them if they want it. Michael and Sophie’s data is going to fly up to my cloud, and Mario is going to feed it into its algorithmic magic and enhance its learning. This will definitely make Mario much smarter about baking bread. The larger the data sets, the better we can crack that vague tablespoon measurement and all the other variables, ultimately addressing multiple ways to make a loaf of bread.

Now the AI machine is learning and processing data coming from multiple sources. A network of bread-making aficionados is going to keep feeding Mario very valuable data sets. And Mario is going to get scary smart at this task, by setting up a neural network for baking bread that resembles the way our brains work. This is what, in AI, is called Deep Learning.
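Here is a minimal sketch of that pooling step, assuming scikit-learn is available and using invented numbers for everyone’s baking logs: combine the bakes and train a small neural network on them.

```python
# A toy sketch (invented data): pool baking logs from several bakers and train
# a small neural network to predict how good a loaf will turn out.

from sklearn.neural_network import MLPRegressor

# Each row: [salt in grams, rise time in minutes, bake time in minutes]
my_bakes      = [[17.2, 90, 32], [18.9, 95, 34], [16.4, 85, 30]]
michael_bakes = [[15.0, 100, 35], [15.5, 92, 33]]
sophie_bakes  = [[20.1, 88, 31], [19.7, 94, 35], [20.3, 90, 33]]

X = my_bakes + michael_bakes + sophie_bakes
y = [7, 8, 6, 7, 7, 9, 8, 9]   # how each loaf was rated, 1 to 10

model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
model.fit(X, y)

# Ask Mario how a planned bake is likely to turn out.
print(model.predict([[18.0, 90, 33]]))
```

The model itself is beside the point; the pattern is what matters: more, and more varied, data in, a less naive Mario out.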

 

Deep Learning Is Where The Electric Sheep Hang Out

Deep Learning is where the AI system can truly become autonomous. Things are in motion so that it keeps learning and learning, by itself. In fact, it’ll figure out all of the international tablespoon variations in a heartbeat. Mario bread-o-matic will then learn all the other infinitesimal variations of everything in that recipe: water temperature, amount of flour down to a speck of dust, the results of every baking time within a defined range, and so on, and so on. As a matter of fact, Mario is going to produce a loaf of bread for each one of these variables and combinations thereof. Millions of loaves of bread. Each one slightly different, if programmed to be so.

 

Abracadabra... A Black Box

When Mario is done doing all that, the real sorcery happens: it will start creating its own variations of the recipe, based on all the training data plus its own iterations of it. It is going to do that in ways we cannot quite decipher. I told you it can get scary.

A true artificially-intelligent system is one that can learn on its own. We're talking about neural networks from the likes of Google's DeepMind, which can make connections and reach meanings without relying on pre-defined behavioral algorithms. True AI can improve on past iterations, getting smarter and more aware, allowing it to enhance its capabilities and its knowledge.(3)

A Black-Box Machine Learning system can output medical diagnoses or creditworthiness evaluations without explaining the rationale behind the decisions. AI will be able to exceed human capabilities in certain areas by being able to consider far more information when making a decision.(4)

Elon Musk (of Tesla fame) is freaking out about this. He is not alone. Perhaps the most prominent and vocal advocate for a more transparent and ethical AI, Musk and his non-profit research company OpenAI (https://openai.com/about/) publish an ever-expanding library of open-source algorithms for everybody to use.

 

The Big O (That’s the letter O, like in Oh Dear, not a zero)

Data is an abstraction, and it's impossible to encapsulate everything it represents in real life. There are uncertainties.(5) AI, so far, remains very task-focused, lacking the contextual awareness we humans rely on in making decisions. And there are other limits: Big O notation is a mathematical notation that describes the limiting behavior of a function. It’s an algorithm’s Oh Dear moment. The notation is used as a tool for assessing an algorithm’s efficiency. We need this one, big time. It’s one of the few measures we have to check whether an algorithm is behaving badly.
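For a concrete feel of what the notation measures, here is a toy comparison of two ways of finding an item on a sorted shelf: one checks every position, O(n); the other halves the shelf at each step, O(log n). The shelf of numbered loaves is, of course, made up.

```python
# A toy illustration of Big O: same answer, very different growth in work.

import bisect

loaves = list(range(1_000_000))   # a very long, sorted shelf of loaves

def linear_search(shelf, target):
    """O(n): check each loaf in turn."""
    for i, loaf in enumerate(shelf):
        if loaf == target:
            return i

def binary_search(shelf, target):
    """O(log n): halve the shelf at every step."""
    return bisect.bisect_left(shelf, target)

print(linear_search(loaves, 999_999))   # up to a million checks
print(binary_search(loaves, 999_999))   # about twenty
```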

While a two-year-old rascal can point at that loaf of bread on your countertop and mumble “bread” with a smile that makes your day, an AI system will have to be fed 40 million pictures of bread before learning to do the same. It won’t crack a smile, and if it is your lost sock on the counter, chances are it is going to think it’s a loaf of bread. The algorithm’s big Oh Dear moment.

 

Geeks suck at PR

Technology is a lot about precision. But when it comes to communicating concepts and ideas, the tech industry at large is a semantic minefield. AI is no exception. Notwithstanding all the amazing things these scientists and engineers are doing, the hyperboles and miscommunications about AI are copious. Misconceptions are everywhere.

Let’s clarify a few points: to be cognitive is not necessarily to be rational. A statistical analysis is not a judgment. Knowledge and intelligence are two different things. And style and creativity are distant cousins. If only computer science curricula had more liberal arts classes in them, and vice versa, the world would be a better place.

 

Cognitivism VS Rational Processes

This silly bread-o-matic exercise of mine might be relatively innocuous as far as bread making goes. But what if an AI system is set to decide who gets parole? Or which geographic region deserves supplies first in case of a large earthquake? Or who qualifies for a loan?

Some of the most impressive aspects of human cognition concern how we’re able to use biases to help us learn.(6) Cognition is the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. Rationality is anything containing quantities that are expressible as a ratio of whole numbers. Guess which one a computer is really good at? Yup, Rationality.

 

*

 

AI Risk Factor

Calculus. Knowledge. Objectivity. Rationale. These are AI’s best friends. Pragmatism, Intuition, Emotions, Empathy: these are AI’s dating nightmares. Many in the AI world believe statistical analysis will eventually lead to a system that understands intuition. I can see how that could happen. But our value judgment is far more complex than a statistical analysis, no matter how large the data set is. Our value judgment takes into consideration empathy and sentiments, like affection or disdain for instance. Our decisions are not always ‘rational’. Just like making bread, our life is messy. Perhaps we should keep it that way.

Andrew Ng, one of the fathers of many of AI’s recent developments, puts it simply: “Lately the media has sometimes painted an unrealistic picture of the powers of AI. And despite AI’s breadth of impact, the types of it being deployed are still extremely limited. Almost all of AI’s recent progress is through one type, in which some input data (A) is used to quickly generate some simple response (B).” He further clarifies: “Being able to input A and output B will transform many industries. The technical term for building this A→B software is supervised learning. A→B is far from the sentient robots that science fiction has promised us.” (7)
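Ng’s A→B mapping, in miniature: a sketch with made-up features, where input A is a pair of crude image measurements and output B is a simple label. This is supervised learning, nothing more, and it is also roughly why the lost sock from earlier can get mistaken for a loaf.

```python
# Toy supervised learning (invented features): input A in, simple response B out.

from sklearn.linear_model import LogisticRegression

# Hypothetical features per photo: [golden-brown-ness 0-1, fluffiness 0-1]
A = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],   # labelled loaves of bread
     [0.2, 0.6], [0.3, 0.5], [0.1, 0.4]]   # labelled lost socks
B = ["bread", "bread", "bread", "sock", "sock", "sock"]

model = LogisticRegression().fit(A, B)

# A beige, fluffy sock may well come back labelled "bread" on features this crude.
print(model.predict([[0.65, 0.7]]))
```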

 

Data In Data Out

To all the sci-fi movie lovers out there: androids are not taking over yet; this is still all about data. Data is, and will remain for the foreseeable future, the most crucial asset a company can hold on to, and monetize. Within limits, ideas can be replicated, and so can AI software. But your data is yours to keep. Andrew Ng again puts a good measure on AI expectations: “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” (8) It is also important to stress that since Mr. Ng wrote that, in 2016, AI has taken some big leaps forward. Things are moving fast. Very fast.

 

The Ability To Apply Knowledge

We established that an AI system can collect data and train itself to master knowledge in any specific field. It then needs to be able to formulate a decision based on that knowledge. It’s not necessarily an ‘intelligent’ output but, rather, a rational one. Knowledge and Intelligence are two different things. I can be an expert on butterflies but have the intellect of a four-year-old. I can acquire a vast amount of knowledge on a specific topic but know nothing to help me make a decision on a topic I have never been exposed to before.

Knowledge is a collection of skills and information a person can acquire through learning or experience (trial and error). Intelligence is the ability to apply that knowledge. AI today is learning processes and methods for applying knowledge, so it can replace us in some tedious, complex, or repetitive tasks, or when a rational decision needs to be made.

AI is very good at performing complex and dedicated functions very fast within a specific scenario. We, humans, are engineered to relate to millions of factors and senses before making a decision. Even in a split-second decision, we process a great deal of data, some of which is emotional data, and we do so within a context. AI is quite diligently trying to get there.

 

Fade To Black

If an AI system is capable of doing things that complement and, in some cases, replace our intellect to make decisions, can it then aid our creativity? Yes, it’s happening already. Can such a system become creative on its own? Not quite. Not yet. Possibly never.

 

Computational Creativity

Yes, there is such a thing. If you think Creativity is just about ‘making new things’, then I don’t even need the complexity of AI. I can write a simple program, in a variety of programming languages, to generate random pretty pictures, or sounds, or bundles of words. To what end I don’t know. There are applications out there that can generate a new logo based on a few user inputs. I personally don’t think of these systems as ‘creative’. I don’t think of these programs as ‘intelligent’ either. I think of them as engineering exercises. Shortcuts. Pretty shallow money-making schemes in some cases. These programs, either guided by some little AI or not, aren’t going to replace designers anytime soon.
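Here is the sort of ‘simple program’ I have in mind, a few lines that assemble random bundles of words. The word lists are mine; the point is how little intelligence, or creativity, is involved.

```python
# Random "bundles of words": pick one word from each list and call it a day.

import random

adjectives = ["bold", "quiet", "electric", "messy", "golden"]
nouns      = ["sheep", "loaf", "sketch", "algorithm", "pixel"]
verbs      = ["dreams", "rises", "wanders", "breaks", "bakes"]

for _ in range(5):
    print(random.choice(adjectives), random.choice(nouns), random.choice(verbs))
```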

 

Fade To White

Creativity is based on a conceptual underpinning, a fabric of emotional approaches, of intuition and mischief, of rule breaking and surprise, of novel communication patterns, all of which are really, really, really hard to train a machine on. AI can be a welcome addition to my toolbox, but I doubt it’ll ever take my job away.

 

*

 

Shortcuts

To understand the minefield that smart folks exploiting the power of AI to create valuable design get themselves into, look no further than The Grid, “the AI-Powered Website Builder”, as they market themselves. I truly applaud their effort, but I think the commercialization of such tools is a disturbing trend. It came as no surprise to me that, after much hype and two years in beta, the results fit in the ‘nice try’ bucket. Google “the grid web builder review” to get the lowdown. The cautionary tale here is that anytime engineers try to shortcut the acutely imaginative and deeply personal creative process, troubles arise.

 

Here Go The Electric Sheep

Scientists pursuing computational creativity need to consider that there is indeed a pure pleasure in constructing, in creating things, that instructs us along the process. They need to take into account that we learn by doing, by getting our hands dirty, by sketching, by exploring; we are able to push things further, to experiment, precisely because we are actively involved in the making. Creativity is not just a cerebral function, but a visceral one too. A messy undertaking, not necessarily a logical one.

They also really do need to factor in the not so far-fetched notion that creativity does not end with an end-product, a ‘fetish’ per se. Perhaps they could benefit from a crash course in conceptual art, in the socio-political implications that a piece of art, or design, or advertising brings into the public discourse.

Creating a machine that makes ‘cool’ pictures won’t bring us any further than where we already are. Producing a million Van Gogh look-alike paintings, or a brand new set of Bach’s Goldberg Variations, is not a very creative or intelligent pursuit, just a shallow exercise.

More poignantly, there seems to be a confusion between style and creativity. Style is, generally speaking, a manner of doing something; whereas Creativity is the use of imagination in the production of an artistic work. Being good at copying a style doesn’t make a person (or a system) ‘creative’, quite the opposite.

 

Where Can We Find AI Today?

In a lot of places. English is my second language; this very article, for instance, is made a bit more legible by Grammarly (https://www.grammarly.com/), an AI-driven tool with basic proofreading features.

You can find crafty AI-driven features on your phone:

  • Predictive text
  • Email classification
  • Automated calendar entries
  • Location-based app suggestions
  • Automated photo classification
  • Route suggestions
  • Voice assistants
  • Voice search
  • Voice-to-text
  • Translation apps


You can find AI in your playlist and music suggestions: Pandora. In a Tesla car. In Siri, Alexa, Google Assistant, and other conversational software. Facebook and other social media players use some form of AI for targeted advertising, photo tagging, and curated news feeds. You can also find nifty AI inside the Adobe software tools you use every day.

 

Sensei

Let’s look at how Adobe is using AI in its products, thanks to us feeding them so much delicious usage data. Adobe’s AI engine is called Sensei and is, as of this writing, powering these features in the Experience Cloud:

  • Smart content tagging
  • Behavioral monitoring. Adobe Analytics proactively alerts you to anomalies and explains changes in customer behavior.
  • Budgeting. Adobe Media Optimizer balances and optimizes ad spend across channels.


Sensei is also powering features in the Creative Cloud, among others:

  • In Photoshop: Detect facial features so that you can change expression or perspective without distortion.
  • In Stock Photos: Find the perfect image faster by filtering characteristics like “depth of field” and “vivid color.”
  • In Premiere: Facial expression tracking and optical flow interpolation help smooth out jump cuts.


AI is put to work here, wisely, where it is most effective: aiding the creative process, not trying to replace it.

 

Something Is Burning

As a range of new technologies strives to enhance lifestyles and communication methods, often invading our personal space, we are witnessing clashes and pushback against some of these applications and features. AI is no exception. New technologies are like children learning to understand empathy (or the lack thereof), the notion of fairness, the complexity of how opinions are forged, things are loved, relationships are formed and sustained, decisions are made. At times there seems to be a disconnect between the techno-utopian ideals of a few and the lives of the rest of us.

Ultimately, it’s up to us to reject invasive technologies and gain a seat at the table to make sure they are designed to support our well-being. Let’s keep things in perspective. AI has the potential to help us be better humans and more efficient designers. For years now we have been able to overcome and dismiss technological inventions that proved to be harmful, or just plain silly and useless. Done right, AI is shaping up to be a great addition to any creative toolbox. Nobody is dreaming of electric sheep.

I better get my bread out of the oven now.

 

*

 

__________________________________

1 Basic Homemade Bread, The Taste of Home Cookbook 2006, p452
2 https://en.wikipedia.org/wiki/Tablespoon
3 R.L. Adams. https://www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-artificial-intelligence-in-use-today/#1538ebff420d
4 Jack Clark. Artificial Intelligence: Teaching Machines to Think Like People. © 2017 O’Reilly Media, Inc.
5 Nathan Yau. http://flowingdata.com/2018/01/08/visualizing-the-uncertainty-in-data/
6 Jack Clark. Artificial Intelligence: Teaching Machines to Think Like People. © 2017 O’Reilly Media, Inc.
7, 8 Andrew Ng in the Harvard Business Review https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now
 
