During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to chat briefly with Vivienne about some of the issues surrounding the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]
Named one of 10 Women to Watch in Tech in 2013 by Inc. Magazine, Vivienne Ming is a theoretical neuroscientist, technologist and entrepreneur. She co-founded Socos, where machine learning and cognitive neuroscience combine to maximize students’ life outcomes. Vivienne is a visiting scholar at UC Berkeley’s Redwood Center for Theoretical Neuroscience, where she pursues her research in neuroprosthetics. In her free time, Vivienne has developed a predictive model of diabetes to better manage the glucose levels of her diabetic son and systems to predict manic episodes in bipolar sufferers. She sits on the boards of StartOut, The Palm Center, Emozia, and the Bay Area Rainbow Daycamp, and is an advisor to Credit Suisse, Cornerstone Capital, and BayesImpact. Dr. Ming also speaks frequently on issues of LGBT inclusion and gender in technology. Vivienne lives in Berkeley, CA, with her wife (and co-founder) and their two children.
Every once in a while I have the opportunity to discuss wide-ranging topics with an intellect that stimulates, is passionate and really cares about the bigger picture. Those opportunities are rarer than one would think. Although set in a somewhat unexpected venue (the elite innards of consumer capitalism), her observations on the inescapable disruption brought by this new wave of modern technologies are prescient and thoughtful. – Ed
Ed: In a continent where there is a large focus on putting people to work, how do you see the challenges and disruptions resulting from AI, robotics, IoT, VR and other technologies playing out? These technologies, as did other disruptive technologies before them, tend to replace human workers with machine processes.
Vivienne: There is almost no domain in which artificial intelligence (AI), machine learning and automation will not have a profound and positive impact. Medicine, farming, transportation, etc. will all benefit. There will be a huge impact on human potential, and human work will change. I think this is inevitable, that we are well on the way to this AI-enabled future. The economic incentives to push in this direction are far too strong. But we need social institutions to keep pace with this change.
We need to be building people in as sophisticated a way as we are building our technology infrastructure. There is today a large and significant business sector in educational technology: Microsoft, Apple, Google and Facebook all have serious interests. But this current focus really is just an amplifier for existing paradigms, helping hyper-competitive moms over-prep their kids for standardized testing… which predicts nothing at all about anyone’s actual life outcome.
Whether you get into Princeton or Brown, or don’t get into MIT, is not really going to affect your life track all that much. Whereas the transformation that comes from making even a more modest, but broad-scale, difference in lives is huge. Let’s take right here: South Africa is probably one of the perfect examples, maybe along with India, of a region in which to make a difference.
Because of the history, we have a society where the starting point is a pretty dramatic inequality of education and preparedness. But that same history did leave you with a strong infrastructure. Change a child’s life in Denmark, for example, and you probably haven’t made that enormous an impact. Do it in Haiti and the best you might hope for is that they move somewhere they might live out a fruitful and more productive life. While that may sound judgmental of Haiti, it’s just a fact right now: there’s only so much one can achieve there because there is so little infrastructure. But you do that here in South Africa, in the townships, or in the slums of Mumbai – and you can make a profound difference in that person’s life. This is because there is an infrastructure to capture that life and do something with it.
In terms of educational technology, doubling down on traditional approaches with AI, bringing computational aids into the classroom, using algorithms to better prepare students for testing… we have not found, either in the literature or in our own research with a 122-million-person database, that any of this makes a difference to one’s life outcome.
People that do this, that go to great colleges, do often have productive and creative lives… but not for those reasons. All of their life results are outgrowths of latent qualities. General cognitive ability, our metacognitive problem solving, our creativity, our emotion regulation, our mindset: these are the things that we find are actually predictive of one’s life outcome.
These qualities are hard to teach. We tend to absorb and learn them over a lifetime of modeling others, of human observation and interaction. So I tend to take a very human perspective on technology. What is the minimum, as someone that builds technology – AI in particular – that can deliver those qualities into people’s lives? If we want to really be effective with this technology, then it must be simple. Simple to deploy and simple to use. Currently, a text-based system is appropriate. Today we use SMS – although it’s a hugely regressive system that is expensive. To reach 1 million kids each year costs about $5 million per year. To reach that same number of kids using WhatsApp or a similar platform costs about $40 per year. The difference is obscene… The one technology (SMS) that has the farthest reach around the world is severely disincentivized… but we’re doing it anyway!
When I’m building a fancy AI, there’s no reason to pair it with an elaborate user interface; there’s no reason I need to force you to buy a testing solution that collects tons of data, and so on. It can, and should, be the simplest interface possible.
Let me give an example with the following narrative: I pick up my daughter each day after school (she’s five) and she immediately starts sharing with me via pictures. That’s how she interacts. She sits with her friends and draws pictures. The first thing she does is show me what she’s drawn that day. I snap a photo to share with her grandmother, and at the same time I cc: MUSE (the AI system we’ve built). The image comes to us, and our deep neural network starts analyzing it.
Then I go pick up my son. Much like me, he likes to talk. He loves to tell stories. We can’t upload audio via SMS (prohibitively expensive) but it’s easily done with an app. Hit a button, record 30 seconds of his story, or grab a few minutes of us talking to each other. Again, that is captured by the deep neural networks within MUSE and analyzed. Some of this AI could be done with ‘off the shelf’ applications such as those available from Google, IBM, etc. Still very sophisticated software, but it’s out there.
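To make that concrete, here is a minimal sketch of what such an ‘off the shelf’ analysis step could look like – a hypothetical stand-in for MUSE’s proprietary networks, not its actual pipeline. It runs a pretrained torchvision classifier over a photo of a drawing; the model choice and the file name `drawing.jpg` are illustrative assumptions.

```python
# Hypothetical sketch: analyze a child's drawing with a pretrained
# image classifier standing in for MUSE's proprietary deep networks.
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing expected by the pretrained model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = Image.open("drawing.jpg").convert("RGB")  # the snapped photo (assumed path)
with torch.no_grad():
    probs = model(preprocess(image).unsqueeze(0)).softmax(dim=1)

# Report the three most likely content labels as a crude "what's in it" signal.
top = torch.topk(probs, k=3)
print(top.values.squeeze().tolist(), top.indices.squeeze().tolist())
```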
The problem with this method is that data about a little kid is now outside our protected system. That’s a problem. In some countries, India or China for example, parents are so desperate for their children to improve in education that they will do almost anything, but in the US everyone’s suspicious. The only sure-fire way to kill a company is to do something bad with data in health or education. So MUSE is entirely self-contained.
Once we have the data and the analysis, we combine that with a system that asks a single question each day. The text-based question and answer is the only requirement (from a participating parent using our system); the image and audio are optional. What the system is actually doing is predicting every parent’s answer to these thousands of questions, every day. This is a machine learning technology known as ‘active learning’. We came up with our own variant, and when it does its predictions, it then says, “If I knew the true answer to one question, which one would provide the biggest information gain?”
This can be interpreted in different ways: Shannon information (for the very wonky people reading this), or which one answer would tell the system the most about all the other questions. So we ask that one question. The system can select the single most informative question to ask that day. We then do a very immodest thing: predicting these kids’ life outcomes. But that is problematic. Not only our research, but others’ as well, has shown almost unequivocally that sharing this information produces a negative outcome. It turns out that the best thing, we believe, that can be done with this data is to use it to ask a predictive question for the advancement of the child’s learning.
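A minimal sketch of that selection step, assuming a simple yes/no question format (the actual MUSE variant is not public): for a Bernoulli prediction, Shannon entropy is a standard proxy for how much would be learned by hearing the true answer, so the system asks the question whose predicted answer is most uncertain.

```python
import numpy as np

def question_entropy(p_yes: np.ndarray) -> np.ndarray:
    """Shannon entropy (bits) of each predicted yes/no answer."""
    p = np.clip(p_yes, 1e-9, 1 - 1e-9)  # guard against log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def select_question(p_yes: np.ndarray) -> int:
    """Index of the single most informative question to ask today."""
    return int(np.argmax(question_entropy(p_yes)))

# Toy example: the model's predicted P(answer = yes) for five candidate questions.
predictions = np.array([0.95, 0.50, 0.80, 0.10, 0.65])
print(select_question(predictions))  # -> 1: the 50/50 prediction is least certain
```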
That can lead to a positive transformation of potential outcome. Prior systems have shown that just the daily reminder to a parent to perform a specific activity with their child is beneficial – in our case, with all this data and analysis, our system can ask the one question that we predict will be the most valuable thing for your child on that day.
Now that prediction incorporates more than you might think. The first and most important thing: that the parent actually does it. That’s easy to determine: either they did or they didn’t. So we need methods to engage with the parent. The second thing is to attempt to determine how effective our predictive model is for that child. Remember, we’re not literally predicting how long a child will live – we’re predicting how ‘gritty’ they will be (as in the research from Angela Duckworth), whether they have more of a growth or a fixed mindset (Carol Dweck and others), what their working memory span will be, etc.
Turns out there are dozens and dozens of constructs that people have directly shown are strongly predictive of life outcomes. Our aim is to maximize these qualities in the kids that make use of our system. In terms of our predictive models, think of this more in an actuarial sense: on average, given everything we know about a particular kid, he is more likely to live 5 years longer, she is more likely to go 2 years further in their education, etc. The important thing, our goal, is for none of it to come true, no matter how positive the prediction is. We believe it can always be more. Everyone can be amazing. This may sound like a line, but quite frankly if you don’t believe that you shouldn’t be an educator.
Unfortunately, the education system is full of people that think people can’t change and that it’s not worth the effort… what would South Africa be if this level of positivity was embedded in the education system? I’m sure that it’s not lost on the population here (for instance all the young [mostly African] people serving this event) what their opportunities could be if they could really join the creative class. Unfortunately there are political and policy issues that come into play here, it’s not just a machine learning issue. But I can say the difference would be dramatic.
We did the following analysis in the United States: if we simply took the kind of things we do with MUSE, and were able to scale that to every kid in the USA (50 million), and had started this 25 years ago, what would the net effect on the US economy be? We didn’t do broad strokes and wishful thinking – we modeled based on actual research (do this little proactive change when kids are young, then observe that in 25 years they are earning 25% more and have better health outcomes, etc.). We took that actual research, and modeled it out, region by region in the USA; demographics, everything. We found that after 25 years we would have added somewhere between $1.3 and $1.8 trillion to the US economy. That’s huge.
The challenge is how do you scale that out to really large numbers of kids, particularly in India, China, Africa, etc.? That’s where technology comes in.
Who’s invested in a kid’s life? We use the generic term ‘caregiver’ – because in many kids’ lives there isn’t a parent, or there’s only one parent, a grandparent, a foster parent, etc. At any given moment in a kid’s life, hopefully there are at least two pivotal people: a caregiver and a teacher. Instead of trying to replace them with an AI, what if we empower them? What if we gave them a superpower? That’s the spirit of what we’re trying to do.
MUSE comprises eight completely independent, highly sophisticated machine learning systems, along with integration, data, analytics and interface layers. These systems are analyzing the images and the audio, producing the questions, making the predictions. We use what’s termed a ‘deep reinforcement learning model’ – a very similar concept, at least at a high level, to Google’s AlphaGo AI system. This type of system can learn to play highly complex games (Go, video games, etc.) – a fundamentally different type of intelligence than IBM’s older chess-playing programs. This new type of AI actually learns how to play the game itself, as opposed to selecting from procedures that have already been programmed into it.
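The ‘learns the game itself’ idea can be shown in miniature. The toy sketch below (my illustration, not MUSE’s code, and tabular rather than deep) is plain reinforcement learning on a five-cell corridor: nothing about the ‘game’ is programmed in, and the agent discovers that moving right pays off purely from reward feedback.

```python
import random

N_STATES, GOAL, EPISODES = 5, 4, 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0 = left, 1 = right

for _ in range(EPISODES):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge toward reward plus discounted best future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

# Learned policy for the states before the goal: 1 (go right) everywhere.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])
```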
With MUSE, essentially we are designing activities for parents to do with their children that are unique to that child and designed to provide maximum learning stimulus at that point in time. We are also designing a similar structure for teachers to do with their students, for students to do with themselves as they get older. In a similar fashion, we are involved in the workplace: the same system that can help parents get the most out of their kids, that can help teachers get the most out of their students, can also help managers get the most out of their employees. I’ve done a lot of work in labor economics, talent management, etc. – managing people is hard and most aren’t good at it.
Our approach tends to be, “Do a good job and I’ll give you a bonus; do a bad job and I’ll fire you.” We try to be more human than that, but – certainly if you’ve ever been involved in sales – that’s the game! In the TED talk that was just published, we showed that this methodology is actually a negative predictor of outcomes. In the workplace, your best people are not incentivized by these archaic methodologies; they are endogenously motivated. In fact, the research shows that the more you artificially incentivize workers, the more poorly they perform, at least in the medium to long term.
Wow! That is directly in contradiction to how we structure our businesses, our educational systems, our societies in general. It’s really hard to gain these insights if you can’t do a deep analysis of 200,000 salespeople, or 100,000 software developers like we were able to do. Ultimately the massive database of 122 million people that we built at Gild allows a scale of research and analysis that is unprecedented. That scale, and the capability of deep machine learning allows us to factor tens of thousands of variables as a basis for our predictive engines.
I just love this space – combining human potential with the capabilities of artificial intelligence. I’ve never built a totally autonomous system. Everything I’ve built is about helping a parent, helping a doctor, helping a teacher, helping a manager do better. This may come from my one remaining academic interest: cognitive neural prosthetics. [a versatile method for assisting paralyzed patients and patients with amputations, recording the cognitive state of the subject rather than signals strictly related to motor execution or sensation] Do I believe that literally jamming things in your brain can make you smarter? Unambiguously yes! I accept we won’t be doing this tomorrow… there aren’t that many volunteers for elective brain surgery… but the enabling technologies are amazing: neural dust, being developed at UC Berkeley, as well as ECoG, which is nearly ubiquitous but is currently only used during brain surgery for epilepsy.
What can be done with ECoG is amazing! I can tell what you’re subvocalizing, what you’re looking at, I can track your decision process, your emotional state, etc. Now, that is scary, and it should be. We should never shy away from that – but the potential is awesome.
Part of my response to the general ‘AI conundrum’ is, “Let’s beat them to the punch – why wait for them [AI machines] to become super-intelligent – why don’t we do it?” But then this becomes a human story as well. Is intelligence a commodity? Is it a function of how much I can buy? Or is it a human right, like a vaccine? I don’t think these things will ever become ubiquitous or completely free, but whoever gets there first – much like with profound nootropics [accelerated development] and other intelligence-building technologies – will enjoy a huge first-mover advantage. This could be a singularity moment: where potentially we have a small population of super-intelligent people. What happens after that?
I know we started from a not-simple question, but at least a very immediate and real one: what are the human implications of these sorts of technologies today – technologies that I think, 20 or 30 years from now, will fundamentally change the definition of what it means to be human? That’s not a very long time period. But ultimately that’s the thing – technology changes fast. Nowadays people say that technology is changing faster than culture. We need cultural institutions to, if not keep up with the pace of change of technology, change and adapt at a much, much faster pace. We simply cannot accept that these things will figure themselves out over the next 20 years… I mean, 20 years is how long it takes to grow a new person – and then it will be too late.
It’s like the ice melting in Antarctica: ignoring the problem is leading to potentially catastrophic consequences. The same is true of AI development – this could be catastrophic for Africa, even for America or Europe. But the potential for a good outcome is so enormous – if we react in time. This isn’t the same story as climate change, where it’s a huge cost just to keep things from being cataclysmic. What I’m saying here is that these costs (for human-integrated AI) pay off, they pay back. We’re talking about a much better world. The hard part is getting people to think that it’s worth investing in other people’s kids. That’s a bit of an ugly reality, but it’s the truth.
My approach has been: if we can scale out some sophisticated AIs and deliver them in ways that, even if not truly free, cost little enough that it can be done philanthropically, then that’s what we’ll do.
Ed: I really appreciate your comments. You went a good way to defining what you meant by ‘Augmented Intelligence’. I had a sense of what you meant by that but this was a most informative journey.
Vivienne: Thank you. It’s interesting – 10 years ago, if you’d asked me about cognitive neural prosthetics, cybernetics, cyborgs… I would have said it’s 50 years away. So now I’ve trimmed more than 10 years off that estimate. Back then, as an academic, I thought, “OK, what can I do today? I don’t have direct access to a brain; can I leverage technology somehow to achieve indirect access to one? What could we do with Google Glass? What could we do with inferential technologies online?” I know I’m not the only person that’s had an idea like this before. At my very first startup, we were thinking of “Google Now” long before Google Now came along. The vision was even more aggressive:
“You’re walking down the street, and the system remembers that you read an article 40 days ago about a restaurant and it really piqued your interest. How did it know that? Because it’s modeling you. It’s effectively simulating you. Your emotional responses. It’s reading the article at the same time you’re reading the article. It’s tracking your responses, but it’s also simulating you, like a true cognitive system.” [An aside: I’ll echo what many others have said – that IBM’s Cognitive Computing is not really cognitive computing… but such a thing does really exist.]
So I’m walking down the street, and the system pings me and says, “You know that restaurant you were interested in? There’s an open table right now, it’s three blocks away, your calendar’s clear and I’ve just made a reservation for you.” Because the system knew, it didn’t need to ask, that you’d say yes.
Now, that’s really ambitious, especially since I was thinking about this ten years ago, but the ambition isn’t whether it can be done – the ambition is the infrastructure. Where do you get the data from? What kind of processing can you do? I think the infrastructure problem is becoming less and less of one today, and that’s where we are seeing many changes.
You brought up the issue of a “Marketplace of Things” [n.b. Ed and Vivienne had a short exchange leading in to this interview regarding IoT and the perspective that localized data/intelligence exchange would dramatically lower bandwidth requirements for upstream delivery, lower system latency, and provide superior results.] and the issue of bandwidth: wouldn’t it be better if every light bulb, every camera, every microphone locally processed information, and then only sent off things that were actually interesting, informative? And they wouldn’t just send it off to a single server – picture a daily exchange on the InfoNYSE: “I’ve got some interesting emotion data on three users in this room, anyone interested in that?”
These transactions won’t necessarily be traditional monetary transactions, possibly just data transactions. “I will trade this information for some data about your users’ interests”, or for future data about how your users responded to this information that I’m providing.
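As a rough sketch of what such an exchange could look like in code (all names here, including `LocalExchange`, are my illustrative assumptions rather than a real protocol): devices process raw data locally, publish only compact summaries by topic, and subscribers answer with data-for-data trades.

```python
from collections import defaultdict

class LocalExchange:
    """Toy in-room marketplace: offers are matched to subscribers by topic."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def offer(self, topic, summary):
        """A device offers a locally computed summary; each taker returns a trade."""
        return [callback(summary) for callback in self.subscribers[topic]]

exchange = LocalExchange()
# A buyer offers future response data in exchange for the emotion summary.
exchange.subscribe("emotion", lambda s: {"give": "our users' response data", "take": s})

# The camera ships only its inference, never the raw video frames.
print(exchange.offer("emotion", {"room": 12, "users": 3, "signal": "elevated engagement"}))
```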
As much as I think the word ‘futurist’ is a bit overused or diffuse, I do admit to thinking about the future and what’s possible. I’ve got a room full of independent processing units that are all talking to each other… I’ve kind of got a brain in that room. I’m actually pretty skeptical of ‘general AI’ as traditionally defined. You know, like I’m going to sit down and have a conversation with this AI entity. [laughs] I think we’ll know that we’ve achieved true general AI when this entity no longer introspects, when it no longer understands its own actions – i.e. when it becomes like us.
I do think general artificial intelligence is possible, but it’s going to be kind of like a whole building ‘turning on’ – it won’t be having a conversation with us; it will be much more like our brain. I like to use this metaphor: “Our brains are like a wildly dysfunctional democracy, with all of these circuits voting for different outcomes, but it’s an unequal democracy, as the votes carry different weights.” But remember that we only get to see a tiny segment of those votes: only a very small portion of that process ever comes to our conscious awareness. We do a much better job of explaining the ‘votes’ post hoc, of just making things happen, than of actually explaining them in the moment.
Another metaphor I use is from the movie “Inside Out”: except instead of a bunch of cutesy emotions embodied, imagine a room full of really crotchety old economists that hate each other and hold wildly differing opinions.
Ed: “Oh, you mean the Fed!”
Vivienne: “Yes! Imagine the Fed of your head.” This is actually not a bad model of our cognitive process. In many ways we show a lot of near optimality, near perfect rationality, in our decision making, once you understand all the inputs to our decision-making process. And yet we can wildly fluctuate between different decisions. The classic is having people play a betting game: if they bet and you reveal that they won, they will play again. If they bet and you reveal that they lost, they will play again. But if they bet and you don’t reveal the outcome, they will be less likely to play again.
Which at one level is irrational, but we hold these weird and competing ideas in our head and these votes take place on a regular basis. It gets really complex: modeling cognition. But if you really want to understand people, that’s the way to do it.
This may have been a long and somewhat belabored answer to your original question regarding augmented intelligence, but the heart of it for me all started with, “What could we do if we really understood someone?” I wanted it to be, “I really understand you because I’m in your brain.” But, lacking that immediate capability, what can I infer about someone, and then what can I feed back to them to make them better?
Now, “better” may be a pretty loose and broad definition, but I’m comfortable with that if I can make people “grittier”, if I can improve their working memory span, if I can improve their ability to regulate their own emotions – not turn their emotions off, not pump them full of Ritalin, but to be aware of how their emotions impact their decision making. That leaves the person free to decide what to do with them. And that’s a world I would be pretty happy with.
I would surely disagree with how a great many people use their lives, even so empowered, but it’s a point of faith for me: I’m a hard numbers scientist, not a religious person, but there’s one point of faith. If we could improve these qualities for everyone, the world would be a better place.
Ed: Going back to the conference that we’re both attending (Consumer Goods), how can this idea of augmented intelligence, and what I would call an ‘intelligent surface’ of our total environment (whether enabled by IoT, social media feedback, GoogleNow, etc.), help turn the consumer ecosystem on its end and truly make it ‘consumer-centric’? By that I mean the consumer actually being in control of what goods and services are invented, let alone sold, to us. Why should firms waste time making and selling us stuff that we don’t want or need, or stuff that is bad for us?
Vivienne: There are a couple of different ideas that come to me. One is something I often recommend in regards to talent. While your question pertains to external customers in regards to retailers/suppliers, an analogy can be drawn to the internal interaction between employees and the firms for which they work: “Companies need to stop trying to align their employees with their business; they need to figure out how to align their business with their employees.”
This doesn’t mean that their business becomes some quixotic thing that is malleable and changeable; you do have a business that produces goods or services. For instance, let’s say your business is Campbell’s Soup – you produce food and ship it around the world. But why does this matter to Ann, Shaniqua, any other of your employees? While this may sound a bit ‘self-helpy’ or ‘business-guru’ it’s actually a big part of my philosophy: Think about the things I’ve said about education: Let’s do this crazy thing – think about what the true outcome we’re after is. I want happy, healthy, productive people – and society will reap the benefits. That is my lone definition of education. Anything else is just details.
I’m telling you I can predict those three things. Therefore any decision I make, right here in the moment, I can align against those three goals. So… should I teach this person some concept in geometry right now? Or how should I teach that concept? How does that align with those three goals?
“My four-year-old is still not reading, should I panic?” How does that align with those three goals? For a child like that, is it predictive of those three things? For some kids, it might be problematic, and it might be time for some kind of intervention. For others, it turns out it’s not predictive at all. I didn’t do well in high school. What I didn’t do there, I did in spades in college… and then flunked out completely. After a big gap in my life I went back to college and did my entire undergraduate degree in one year – with perfect scores. Same person, same place, same things. It wasn’t what I was doing (to borrow a phrase from someone else) – it was why I was doing it.
Figuring out why it suddenly mattered to me at that time, I realized it all coalesced around the idea of maximizing human potential. Suddenly it had purpose; I was doing things for a reason.
So now we’re talking about doing this inside of companies, with their employees. Figuring out why your company matters to this employee. You want them to be productive – bonuses aren’t the way to do it. Pay them enough that they feel valued, and then figure out why this is important to them. And true enough, for some people that reason might be money – but for others, not.
So what does that mean for our consumer relationship? My big fear is that when CEOs or CMOs hear this (human perception modeling, etc. as is used in AI development) they think, “Oh, let’s figure out why people will buy our products!” When I hear about ‘brain hacks’ I don’t think of sales or marketing, I worry about the food scientists figuring out the perfect ‘sweet spot’ of sodium, fat and carbohydrates in order to make this food maximally addictive (in a soft sense). I’m not talking about that kind of alignment. I’m saying, “What is your long term goal?”
Every one of those people on stage (at the Consumer Goods Forum) made some very impassioned speeches about how it’s about the health of consumers, their well-being, the good of society, it’s about jobs, etc. It’s shocking how bad a reputation those same firms have, at least in the USA, along those same dimensions – if that’s what they truly care about. And yet their response to the above statement is, “Gosh, we need a better branding campaign!”
Well… no, you firms are probably not nearly as aligned around those positive outcomes as you think you are; I believe you feel that way, and that you feel abused by our assumptions that you are not (acting that way). I do a tremendous amount of work and advising in the area of discrimination in human capital. You know, bias, discrimination… it’s not done by villains, it’s done by humans.
Ed: I think what’s difficult is that for true authenticity to be evident, to really act in an authentic manner, one must be able to be self-aware. It’s rare to find that brutal self-analysis, self-questioning, self-awareness. You have pointed out that many business leaders truly believe their hype, their marketing positions – whether or not there is any real accuracy in those positions.
Vivienne: I just wrote an op-ed for the Financial Times, “The Neuro-Economics of Inequality” (not its actual title, but it’s the way I think about the issue). What happens when someone learns, really legitimately and rationally learns, that their hard work will not pay off? Not the way that, for example, it will for the American white kid down the street. So why bother? Even for a woman: so I’ve got a fancy degree, a great college education; I’m going to have to work twice as hard as the man to get the same pay, the same reward… and even then I’m never going to make it to the C-suite anyway. If I actually do get there, I’m going to have to be “that” kind of executive once I’m there… I’d rather just be a mom.
These people are not opting out, they are making rational decisions. You talk to economists… we went through this and did the research. We could prove the ‘cost of being named José in the tech industry’, the ‘cost of being black on Wall St.’ – this completely changes some of these equations when you take that into account. So, bringing this back to consumers, I don’t have ready answers for it as I’m a bit dismissive of it. “Consumerism” – that’s a bad word, isn’t it?
While I’m not sure of the resonance of this thought: what if you could take the idea that I’m talking about – these big predictions, Bayesian models that are giving you probability distributions over a potential consumer’s outcomes? Not ten minutes from now – or rather, ten minutes from now is only part of what I’m talking about. We’re integrating across the probability distribution of all potential life outcomes from something as minor as “they ate your bag of potato chips.”
I’m willing to bet that if you had to ‘own’, in some sense, at least morally if nothing else, the consequences – knowing the short-term effect: a nice little hedonic increase in happiness; the mid-term effect: a decrease in eudaimonic happiness; the long-term effect: a decrease in liver function; and so forth… your outlook might be different. If you’re (brand X), that’s just an externality. So I think there are some legitimate criticisms: why talk a fancy game when it’s just corporate responsibility?
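That ‘integrating across the distribution of outcomes’ can be written down very simply. Below is a hedged sketch with invented numbers (none of these probabilities or effect sizes come from the interview): each horizon’s predicted effect of the purchase is weighted by its probability and discounted by how far away it is, collapsing the whole distribution into one expected-impact score a brand could be asked to own.

```python
# All figures are illustrative assumptions, not measured effects.
horizons = [
    # (probability the effect occurs, effect size, years until it is felt)
    (0.9, +1.0,  0.0),   # short term: small hedonic boost in happiness
    (0.5, -0.5,  1.0),   # mid term: lower eudaimonic happiness
    (0.2, -3.0, 10.0),   # long term: reduced liver function
]
DISCOUNT = 0.97          # assumed annual discount rate

expected_impact = sum(p * effect * DISCOUNT ** years
                      for p, effect, years in horizons)
print(f"expected lifetime impact of the purchase: {expected_impact:+.2f}")
```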
Yes, optimizing your supply chain and reducing food waste are nice, but that’s really just because you spent money moving food around the world, some of which got wasted – you want to cut back on that. Beyond that, my observation as an outsider to this sector is that it’s about corporate responsibility, and by that I mean the marketing practices. If you really want to put your heart where your mouth is, then take ownership of the long-term outcomes. Think about what it means for a nine-year-old to eat potato chips. Certain ‘health food’ enterprises have made a lot of money out of this idea, providing a healthy place in which to shop. Certainly, in comparison to a corner store in a disenfranchised neighborhood in the US, they are a wildly healthy choice, but even these health shops have an entire aisle dedicated to potato chips. They’re just organic potato chips that cost three times as much. I buy them every now and then. I’m a firm believer in eating well, just eating with some degree of moderation.
That would be my approach. My approach in talent, in health, in education, and in a variety of domains in policy-making has been: let’s leverage some amazing technology to make these seemingly miraculous predictions (which they’re not – they’re really not even predictions but actuarial distributions). But these still inform us.
Right now, with this consumer, we’re balancing a number of things: revenue, sustainability, even the somewhat morbid sustainability of our consumer base; we’re balancing our brand. What’s the one action we could take right now as an organization in respect to this person that could maximize all of those things? Given their history, it’s hard to believe that it’s going to be something more than revenue, or at least something that’s going to actually cost them. If I actually believed they would be willing to take this kind of technology and apply it in a truly positive way – I’d just give it to them.
I mean, what a phenomenal human good it would be if some rather simple machine learning could help them actually have a really different paradigm of ‘consuming’. What if every brand could become your best friend, and do what’s in your best interest – albeit as seen from the brand-owner’s perspective? Yeah, it’s pretty hopeful to think that could actually happen, but do I think that could happen?
That’s what we’re hoping for in some of our mental health work. By being able to make these predictions, we’re hoping to intervene not just on behalf of the sufferer but through trusted confidants as well. The way I often put it is: I would love it if our system could be everything your best friend is, but even more vigilant. What would your best friend do if they recognized the early signs of a manic episode coming on? Can we deliver that two weeks earlier and never miss the signals?
Going back, I just don’t see where big consumer companies own that responsibility. But let me pull back to my ‘Marketplace of Things’ idea. There’s a crucial aspect here: that of agents. I can have my own proxy, my own agent that can represent me. In that context, then these consumer companies can serve their own goals. I think they do have some goal in me being alive, so they can continue to earn out my customer lifetime value as a function of my lifetime. They have some value attached to me spending money in certain ways that are more sustainable, that are better for their infrastructure, etc.
I think in all those areas they could take the kinds of methodologies I’m describing and apply them through AI and machine learning. On my side, if I’m proxied by my own agent – well then we can just negotiate. My agent’s goal is really to model out my health, happiness and productivity. It’s constantly seeking to maximize those in the near, medium and long term. So, it walks into a room and says, “All right, let’s have a negotiation.” Clearly, this can’t be done by people, as it all needs to happen nearly instantaneously.
I don’t think the cost of these solutions will drop low enough that we’ll literally be putting them into bags of potato chips. Firstly we must imagine changes in the infrastructure. Part of paying for shelf space in a supermarket won’t be just paying for the physical shelf space, it will be paying for putting your own agents in place on that shelf space. They’ll be relatively low cost, but probably not as disposable as something you could build into the packaging of potato chips. But simply by visiting that location, I pick up all the nutrition information I need, I can solicit information from the store about other people that are shopping (here I mean that my proxy can do all this). Then that whole system can negotiate this out, and come up with recommendations.
To me, it may seem like my phone or earpiece is simply suggesting, “How about this, how about that?” While not everyone is this way, I’m one of those people who actually enjoys going to the supermarket, feeling how it’s interacting with me in the moment. That’s something my agent can take into account as well. This becomes a story that I find more interesting. Maybe this is a set of combined interactions that takes into account various food manufacturers, retailers – and my agent.
Today, I’m totally outside this process – I don’t get to play a role. The things I like, I just cross my fingers and hope they are in stock when I am in the store. The price that I pay: I have no participation in that whatsoever (other than choosing to purchase or not).
Another example: Kate from Facebook [in an earlier panel discussion] was telling us that Facebook gives a discount to advertisers for ads that are ‘stickier’ – that people want to see and spend more time looking at. What if I was willing to watch less enjoyable ads – if FB would share the revenue with me?
None of these are totally novel ideas, but none of them will ever come to realization if one of the fundamental sides to this negotiation never gets to participate. I’m always getting proxied by someone else. I don’t have to think that Facebook or Google are bad companies, or that Larry Page or Mark Zuckerberg are bad people for me to think that they don’t necessarily have my best interests at heart.
That would change the dynamic. But I sense that some people in the audience would see that as a loss of control, and most of them are hyper risk-averse.
Ed: As a final thought or question, in terms of the participation between consumer and producer/retailer that you have discussed, it occurs to me that perhaps one avenue that may be attractive to these companies would be along the lines of market research. Most new products or services are developed unilaterally, with perhaps some degree of ‘traditional market research’ where small focus groups are used for feedback. From the number of expensive flops in the marketplace it appears that this methodology is fraught with error. Could these methodologies of AI, of probability prediction, of agent communication, be brought to bear on this issue?
Vivienne: Interesting… that brings up many new ideas. One thing that we did in the past – we’re not doing it now – was listen in to students conversing with each other online. We actually learned the material they were studying directly from the students themselves. For example, start with a system that knows nothing about biology; it learns biology from the students talking amongst themselves – including wrong ideas about biology. What we found when we trained the system to predict the grades that the students would receive was that, even after new students entered the class, with new material and new professors, we knew after one week what grade they would get at the end of the semester. We knew with greater and greater accuracy each week which questions they would get right or wrong on the final exam. Our goal in the exercise was to end all standardized testing. I mean, if we know how they are going to score on the test, why ever have a test?
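A minimal sketch of that kind of grade prediction, under the assumption of a simple text-regression setup (the original system and its features are not public, and the data below are toy examples): represent each student’s first week of online discussion as TF-IDF features and fit a regularized linear model against final grades, so a new student can be scored after one week.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training data: one string of week-one discussion per student.
week_one_posts = [
    "mitosis splits one cell into two identical daughter cells",
    "i think osmosis is when water moves toward higher solute concentration",
    "not sure what the difference between dna and rna even is",
]
final_grades = [92.0, 88.0, 61.0]  # invented end-of-semester grades

# Text features -> regularized linear regression, in one pipeline.
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(week_one_posts, final_grades)

# Score a new student from their first week of conversation.
print(model.predict(["osmosis moves water across a membrane toward solutes"]))
```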
Part of our goal there was to simulate the outcome of a lecture. There’s some similarity to what you’re discussing (producing consumer goods). Lectures are costly to develop; you get one chance to deploy each one per semester or quarter, with limited feedback, etc. You would really like to know ahead of time if a lecture was going to be useful. Before we pivoted away from this more academic aspect of education into this life-outcomes type of work, we were wondering if we could give feedback on the effectiveness of a given lecture before the lecture was given.
“Hey, these five students are not going to understand any of your lecture as it’s currently presented. Either they are going to need something different, or you can explore including something else, some alternative metaphors, in your discussion.”
Yes, I think it’s intriguingly very possible to run this sort of very disruptive market research. Certainly in my domain I’m already talking about this: I’m asking one question each day, and can predict everyone’s answers to thousands of questions. That’s rather profound, and quite efficient. What if you had a relationship with a meaningful sample of your customers on Facebook and you could ask each of them one question a day, just like I described with my educational work? Essentially you would have a deep, insightful, rolling model of your customers all the time.
You could make predictions against this model community for future products, and run some basic simulations of those types of experiences. I agree, this could be very appealing to these firms.