Podcast episode: Amir Goldberg
January 27, 2023
With Chris Potts
AI and social science, the causal revolution in economics, predictions about the impact of AI, teaching MBAs, productizing AI, and a journey from Tel Aviv to Princeton to Stanford.
Show notes
- Amir's website
- Amir on Twitter
- Computational Culture Lab
- ChatGPT
- Laura Nelson
- Bart Bonikowski
- Chris Winship
- Bernie Koch
- Treebanks
- BIG-bench
- Guido Imbens
- Endogeneity
- Susan Athey
- Cambridge Analytica
- Prediction Machines
- Speech and Language Processing
- DALL-E 2
- Midjourney
- Stable Diffusion
- Postmodernism, or, the Cultural Logic of Late Capitalism
- Turing test
- Matt Salganik
- Paul DiMaggio
Transcript
Chris Potts:Welcome everyone. My guest today is Amir Goldberg. Amir is going to provide us with a wise, detached perspective on the state of AI in the world and at Stanford. Amir is an Associate Professor of Organizational Behavior in the Stanford Graduate School of Business, where he co-directs the Computational Culture Lab, and he also has a courtesy appointment in Stanford Sociology. Amir is a student of people and organizations and culture, and he makes rich use of computational techniques, especially NLP and network-based techniques, in his research.
So Amir, welcome to the podcast. I'm hoping for your insightful perspective on AI at Stanford and indeed in the world. At Stanford, every conversation I have seems to be about AI in one way or another, and you might have a better perspective on this since I imagine you can kind of move in and out of these waters. So what's your view of AI at Stanford? What's its position in our culture? And what's your sociologist's assessment of this whole scene?
Amir Goldberg:Thank you. Well, first of all, thank you for inviting me and for this kind introduction. It's great to be in this conversation.
My impression is that not every conversation at Stanford is about AI.
Chris Potts:That's reassuring.
Amir Goldberg:Yes. However, I have to tell you that my dominant teaching occurs in the MBA program, which is the flagship program at the Graduate School of Business. And ChatGPT has definitely come up in every session I have taught since the beginning of this quarter, unlike other AI-specific technologies or AI more generally. So clearly there's been a kind of phase transition in terms of people's fixation on this specific technology over the last few weeks.
That said, the parts of Stanford that I walk through are not only about AI. I think that there's a clear sense that AI is one of the most important topics of study, as a technological agent of change, and some people fear it. There's a lot of interest in using AI technologies. But I don't think it's the only and predominant conversation.
The worlds that I travel in – the vast majority of the faculty at the Business School are economists, or broadly speaking finance people who were trained in the economics profession. Other than that, quite a few psychologists and sociologists.
Economics is a very slow-changing field when it comes to methodology. That's very general in the social sciences. And there's been a lot of hesitation and skepticism towards the adoption of AI instrumentation as tools of research. It's slowly, slowly, slowly diffusing, but definitely not at the pace that new algorithms and new models are being released. So I would probably say that, amongst my peers, the vast majority are neither using AI nor studying it as a topic of research.
Chris Potts:That's really interesting. I mean, it's reassuring to me. Because I feel often, even when I'm in my Humanities cluster and we are talking about history, or film, or creative self-expression, AI is there in the background for us, as a draw for students or as something that might distort the kinds of things we ask them to write, or the way they express themselves. But in general, it just feels like it's always present in the room for us. So I guess I'm reassured that at the Business School, there are a few other centers of gravity kind of offsetting this.
Amir Goldberg:I'm old enough already to kind of notice the ebb and flow of academic fashions. I don't think that AI is an academic fashion. I do not. But when new technologies are introduced, initially there's a lot of quick land-grab. "Can I use word embeddings and quickly move in to pick the low-hanging fruit?" So you do see that.
But I think especially amongst seasoned researchers, there's a little bit of skepticism. I once presented – this was, I don't know, six, seven years ago – in a workshop organized at Harvard by Laura Nelson and Bart Bonikowski, two sociologists who have been leaders in the adoption of machine learning and textual methods in sociology. And I presented a paper using topic models. In other technological domains, that would be like 1950s technology by now. And I remember being asked by Chris Winship, who's a very prominent and influential methodologist at Harvard: "Sometimes I feel that all of this NLP is just the new model of a car, and everybody's enjoying driving it. Can you make a compelling argument for why our old cars aren't doing the work for us?" I think that's very much a common sentiment in the social sciences, which is, "Okay, you have ChatGPT, so what?" You can also make a kind of sociology-of-science argument, which is: the old guard has been invested in other methods, whose prominence is a function of their own status in the field, so they are going to be hesitant, because this might cannibalize their abilities and undermine their status. So they also have an incentive to reject it.
Chris Potts:I find that reassuring too because I feel like within NLP, the standard mode is kind of breathless enthusiasm for the next big thing. Every time there is an announcement, I can read tweets from people saying confidently that all the old stuff is now irrelevant.
Amir Goldberg:Yeah, yeah, yeah.
Chris Potts:On the basis of one kind of set of experimental results. And sometimes they turn out to be right. But it also can be kind of self-fulfilling in the sense that if everyone is acting that way, you have this kind of headlong rush into every new thing. And it's true that old ideas get forgotten, but that's because the culture has decided to forget them, not because it was some rational process of investigation. I wish we would be a little more conservative in NLP the way my colleagues in linguistics tend to be conservative.
Amir Goldberg:Sometimes that conservatism, of course, can be frustrating. I mean, as someone who has used these methods and gone through the grind of trying to publish them, the amount of skepticism that you receive and the level of scrutiny that you're subjected to, relative to more traditional approaches, can sometimes be very frustrating.
But I think you're right. There is generally a good dynamic. Some people are incentivized to push new methods, others are incentivized to defend the old ones.
But I think if you think about also how these different fields are organized, you can already see that in the social sciences, the incentive structure, but also the scientific ideology, is a little bit different than it is in engineering and in computer science. In the social sciences, there's a huge emphasis on understanding mechanisms. And prediction in and of itself is not seen as an objective. So the fact that you can increase the predictive ability of a certain task by X percent is in and of itself not impressive, unless this also lends some sort of insight about the mechanism underlying the phenomenon.
I think the incentives in computer science are often more oriented towards prediction. And also, on the other hand, in computer science, there's a huge culture of, "Let's just release my results, share them with the world. If other people can replicate them, or find bugs, or whatever, then they will have proven me wrong. But if they can't, they will find that the tool that I've developed is useful."
In the social sciences, people are far more protective. They sometimes invest a lot of energy in collecting data. And there aren't a lot of reputational returns for replication. You want to be the first one to introduce an idea. So I think both of these dynamics lead to more conservatism in the social sciences in terms of adopting new technologies.
Chris Potts:That's interesting. That dynamic within AI is relatively recent. And I think it comes from an actual shift in the culture toward valuing things like data sets, and models, and code, and other things like that. Where before they were less valued than papers, and therefore people had an incentive to churn out five or six papers before they made the data public. Whereas now, you get the most citations if you have a big hit data set that everyone wants to work on. And that really does bring you a lot of cultural capital. So it could change in these other fields. I'm not sure what caused it in NLP.
Amir Goldberg:It could. But that also produces another type of problem, which is that everybody converges on the same data sets.
Chris Potts:Yes.
Amir Goldberg:Actually, one of our seminar speakers recently was a sociologist from UCLA. His name is Bernie Koch. He writes about how the proliferation of dominant benchmarking data sets might be problematic from the point of view of exploring new technologies and ideas because they narrow things. Certain constraints and biases are already embedded into them.
Chris Potts:Of course. But I will say, to your colleague, it's gotten much better. When I first started in the field, there were a handful of heavily used data sets. The Penn Treebank is famous for having been used for 20 years as essentially the only thing people used for syntactic parsing models. Now, because of the incentive shift, and because we can be more ambitious, there are vastly more data sets than ever before. And you have efforts like BIG-bench, for example, which assembled hundreds of tasks against which you can now test models. I don't want to say that the problem is solved, because it still is narrow. And there are still a few big winners in this space that everyone is obsessed with. But still, the trend is good, I would say.
Amir Goldberg:I agree with you that the trend is good, but I think the crux of the problem is that, irrespective of what the data sets are, there is convergence on what the benchmark is – what the ultimate metric is with which you evaluate the quality of some sort of model. And that too could lead to convergence that is narrowing and constraining, even if it's applied to multiple data sets. I think in that realm, to some extent, there's more heterogeneity in the social sciences, in that we kind of disagree about what the metrics are necessarily going to be.
Chris Potts:But here's a counterpoint that I feel deeply in linguistics. We have no benchmarks in linguistics proper. It's like sociology. If you have a really far-out idea that no one is going to believe when you describe it, you have almost no chance of getting a hearing, because there is no objective benchmark. In NLP, if your wild idea does well on one of these benchmarks, people will listen. And that can actually create an opportunity for these wild ideas to get a hearing. Whereas I'm just guessing that sociology is very driven by people's tastes. I'm not sure whether you want to admit it, but-
Amir Goldberg:For sure. I can also tell you something interesting about the difference between sociology and economics, because I now kind of straddle these worlds. I'm definitely in the world of sociology. But sociology is far more heterogeneous methodologically, and in the types of DVs [dependent variables] that we use in our models.
Economists have significantly greater consensus on what the important outcomes are. For example, firm performance, stock performance. And economists also have significantly greater convergence, especially in recent years, in light of the causality revolution in the field, on what the standards are for evaluating what is good or bad research. And sociologists don't. So I think that leads to far more crazy theories in sociology, but far less consensus on what good models of the world are.
Chris Potts:Right. I was anticipating that this revolution around causal inference in economics was having its own distorting effect, which is that now the smart thing to do for an eager young scientist is to look for problems where they're going to be able to have causal inferences.
Amir Goldberg:Yes.
Chris Potts:And so therefore, complicated problems where there's no simple causal story are now just going to be understudied, because you don't get to be a taste-maker and show this thing they're all seeking. Right?
Amir Goldberg:For sure. My colleague Guido Imbens won the Nobel Prize in 2021 with others, because of his important contributions to the causality movement. He was quite an outsider in the early 1990s as far as I can tell. And the causal revolution really challenged prevailing models.
If I understand correctly, the process back then in economics – there were a lot of very nice, elegant theories – but the Guido Imbenses of the world said, "Nobody has really, in a rigorous way, tested whether they're actually consistent with empirical reality." And the pendulum has shifted.
But I feel like what has happened in economics is that it has shifted too much. In economics this is referred to as the endogeneity problem. The problem with a mis-specified causal model is that the DV is endogenous to the independent variable. And the enforcers of this standard are known colloquially amongst many economists as the endogeneity police. Sometimes I've even heard it as the endogeneity Taliban. It's now become a very strong ideological movement, which basically says, "Unless you persuade me that your result is well-specified from a causal estimation point of view, I don't care about your research." And that, at the extreme, kind of forces you to look for the keys only under the lamppost, because that's where you have the light.
To be honest, not a lot of interesting theoretical ideas have come out of economics in the last 20, 30 years as a consequence. The biggest revolution in economics has been behavioral economics, which was entirely done by psychologists. And some of the most interesting research done in economics, in my view, is actually coming from anthropologists who have moved into economics, from historians, who are not wedded to the endogeneity ideology.
Chris Potts:I love this. So if I'm an anti-authoritarian economist and I want to make a name for myself, so I want to buck the trend of causal inference, what's the wild idea that I express now? It's that I come at this from a perspective of history or anthropology? And so I'm explicitly saying that this isn't the kind of science you thought it was? It's something else?
Amir Goldberg:I'm going to demonstrate to you. I'm going to propose a new theoretical idea. I'm going to collect data that is consistent with the idea. But I'm not going to be able to persuade you that I have fully specified the mechanism that is implied by my idea.
The problem is you get shut down. It's very difficult in economics seminars. And that kind of culture has pervaded business schools. Most of my research doesn't meet the standards of causal identification of that sort. I introduce a new idea. I immediately concede that it's not well causally specified. But nevertheless, I get pushback from people who are saying, "This is not causally specified," to which I respond, "I know. I just conceded that point."
The way I see the division of labor is, you can't do everything at the same time. You can't think of new theories, think of new ways of measuring and operationalizing your constructs (maybe these aren't even new ways – maybe you've introduced new concepts, so nobody has even measured them before), and at the same time also specify the model causally.
It's such a huge burden to make a logical argument about a theory, to demonstrate the validity of your empirical measurement strategy, and to get at causality. If we are to expect that of every study, then either they will be mediocre on all dimensions, or they will only be able to be good on one dimension.
So I'm happy to say my niche in the world of social science research is to think creatively about new theories, because that I find very intellectually rewarding, and to think of how to use data and AI in creative ways to measure them. And some of the brilliant people who think very creatively about how to solve causal problems, if they are persuaded that the ideas that I introduced are worth pursuing, they will be the experts in unpacking the causal relationships and quantifying them.
Chris Potts:Well, what about this though, for a perspective from AI? That's a field that until recently has paid no attention to causal inference. And the way that comes out is that it's entirely about system building. So if you pragmatically just make progress on a task relative to some metric, it's regarded as progress, even if you explicitly don't believe that you understand the causal mechanisms, or that they would be the right ones for the way the model is making decisions. I think the field is going to have to reckon with this as we worry about things like trust and safety. But for now, because it's just all about correlations and performance, everyone has been uninhibited. You can study any problem. And if you do a little bit better on your dataset, you do well. That's a kind of counterpoint to the economics thing. And it's good and it's bad. Right?
Amir Goldberg:Well, of course. So it's good that different disciplines have different cultures. If the endogeneity police were policing everybody across university, we would be a police state. So it's good that we're not one. And I think that's part of the strength of academia and part of the way that universities, and especially Stanford, are purposefully built.
Stanford is a very loose federation of very independent entities. In a normal company, occasionally the CEO has a town hall or an all-hands meeting with everybody, talking about the vision and the strategy. How often do I meet with the president of the university? Not that much. How often do I meet with people in other parts of the university? Only to the extent that I'm interested; there isn't any compelling reason for me to do it otherwise. I think that's by design, because we don't want to follow each other's fashion cycles. And so I think that's good.
But what I've also seen is that – let me use a very antiquated term – in the "machine learning era", there was a huge move over the last six, seven years, first of all, to specify machine learning prediction tasks that are also causally specified. And also to use these as novel tools, as econometric devices for understanding causal processes. My colleague Susan Athey has been a leader in that field, for example. So I think there's some cross-pollination there as well.
And I think a lot of people in engineering and computer science recognize that sometimes it is really important to understand the causal mechanism. And if you want, for example, in Facebook, to understand, not just trace, the diffusion of hate speech, to understand which actions Facebook might take to subvert hate speech, you have to introduce some causal model into your modeling. So there have been outside pressures – sociopolitical pressures on platforms like Twitter and Facebook. But there's also I think a lot of intellectual influence coming from economics and the social sciences more broadly.
Chris Potts:I think that's wonderful, actually. And that relates to my comment about increasing concerns about trust and safety. The hate speech one is a lesson learned by the field I would say. Implicitly, the early work did have a causal model. If the text contains some kind of racial or ethnic slur, it is hate speech, otherwise not. And they built models that were guided by exactly that simple causal model. And they caused a lot of harm. And now I think the field is having to reckon with the fact that hate speech is more complicated linguistically and socially, and therefore the deployed tool is going to have to be much more sophisticated and thoughtful about the whole context.
Amir Goldberg:But in the process, if you think just about hate speech, I think it is very emblematic of what AI or machine learning has done to my side, the social sciences.
To go back to your first question: how does AI manifest at Stanford, and more broadly, in the social sciences? Very broadly speaking, you can think of basically two kinds of streams.
One is, can we use AI as a tool to operationalize things better than we had operationalized them in the past, or as a way of measuring things that we simply haven't been able to measure? Word embeddings are a beautiful example of how we can measure bias and measure meanings in ways that we could never do at scale before (see the sketch below).
And the other way by which AI influences social sciences is as an object of research in and of itself and thinking, "Okay, what is AI doing to society? Is AI a technology that is somehow fundamentally influencing social dynamics in ways that didn't exist before?"
There's a huge conversation that has been raging now for a decade at least, in terms of what is the influence of the emergence of social networking platforms and the feed algorithms behind them, etc., in terms of facilitating political polarization, diffusing hate speech, etc. But in the process, they enabled us to learn a lot of new things about the psychological mechanisms that lead people to be receptive to hate speech. Sadly, this has been used in very nefarious ways by very, very problematic actors in the political arena as we all know. But I think it's kind of emblematic of what AI is doing. It's both changing the society that we're living in, but also giving us tools to study sometimes very fundamental processes about human social cognition.
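To make that word-embedding measurement idea concrete, here is a minimal sketch, assuming the gensim library and its downloadable pretrained GloVe vectors. The word lists and occupation terms are illustrative toys, not a validated instrument from the research described here.

```python
# Sketch: using word embeddings to measure a cultural association at scale.
# Assumes gensim is installed; the first load downloads the vectors (~130 MB).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained 100-d GloVe vectors

def association(word, pole_a, pole_b):
    """Mean similarity to pole_a minus mean similarity to pole_b."""
    sim_a = sum(vectors.similarity(word, w) for w in pole_a) / len(pole_a)
    sim_b = sum(vectors.similarity(word, w) for w in pole_b) / len(pole_b)
    return sim_a - sim_b

# Where does an occupation fall on a male/female axis of the embedding space?
male, female = ["he", "man", "his"], ["she", "woman", "her"]
for occupation in ["nurse", "engineer", "teacher"]:
    print(occupation, round(float(association(occupation, male, female)), 3))
```

A positive score means the occupation word sits closer to the male pole than the female pole of the embedding space, which is the kind of bias measurement alluded to above.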
Chris Potts:Well, this is wonderful. I have a few things I want to pick up on there. First, the rise of large social networks like Facebook and Twitter and so forth. Did we enter into an entirely new era when those became big? Did that fundamentally change society?
Amir Goldberg:The jury's out on that.
Chris Potts:The jury would still be out? It feels like it changed the fabric of life at every level. So my own life was changed. And also just the way our society is structured changed.
Amir Goldberg:Okay, but then you need to ask – be more specific about the question. Surely it changed something, right? But the question is, was it a game changer with respect to political mobilization, with respect to political polarization, with respect to mental health problems? And here the jury is out, because causal estimation here is really, really difficult. We don't have multiple counterfactual worlds in which social networks were not introduced. And here's what happened contemporaneously with the emergence of these social networks. We had significant financial crises, culminating in 2008, but after a huge 30-year run of increase in inequality throughout the industrial world. We had the collapse of the Soviet Union and the end of a bipolar geopolitical regime. We started moving 70 years out from the end of the Second World War, so that the first-hand victims of 20th century fascism and industrial scale war were no longer there to tell the stories about how important democracy is.
So all of that happened at the same time. And it's very difficult to say, for example, if you think with respect to the question of polarization, is Facebook the only agent of polarization, or is it also the receding memory of the Second World War? Is it the financial crisis that has happened because of neoliberal economic policies? I think the answer is all of the above.
Chris Potts:Sure. So I can't make my first question more specific or improve it in any way. But nonetheless, I will press on. My second question, which might not make sense given the first: The large language models, are they as big a deal as the social networks? Have we entered into a qualitatively new era for AI that's going to have as profound a change as the change that happened with the internet?
Amir Goldberg:I'll have to look this up. There's a saying in Hebrew, from the Bible: "Prophecy is for the fools." I forget who said it. It would take me a while to look it up.
So I would be a fool to make a prophecy. But nevertheless, I'm going to make a prophecy.
I think foundation models for language are going to be a revolution that will dwarf social networks – dwarf their social impact.
It's going to be difficult to disentangle the other innovations, which will surely either be catalyzed by these models or emerge contemporaneously with this revolution. But I have to tell you that I am strongly persuaded at this point that what we have seen over the last 30 years is only a prelude to what we're going to be seeing in the next couple of decades. And the revolution might very well be of a magnitude comparable to what the Industrial Revolution did to humanity, basically.
Chris Potts:That's fascinating. I'm happy to predict that there will, in the next few years, be some disastrous world event that traces to the intentional or unintentional misuse of a foundation model, to fabricate an event, or to spread a message through a network together with images and video, or whatever it is. And that will be major. I don't think society's going to wise up fast enough. That will be on the back of the social networks, right? Because if you removed all the social networks, these messages might be kind of inert. But they're going to travel fast and that will be part of it.
Amir Goldberg:Yeah. So there's definitely a whole ecology of multiple types of technologies. I mean, some of the things that have happened – just think about the emergence of the smartphone, and the miniaturization of the camera, and the GPS. All of that also fueled a lot of processes that are now traveling across social networks. So there are these whole ecological complementarities that are leading to tipping points that are just impossible to predict. But I would definitely put money on your bet that something bad is going to happen.
Let's compare it to what happened over a couple of centuries that is kind of packaged as the Industrial Revolution. In retrospect, the Industrial Revolution catalyzed a lot of things that people would agree are very good, such as modern medicine. Before the Industrial Revolution, the vast majority of humanity worked in agriculture. Now in the United States, the number is about 1.8%, I think, or something like that.
The emergence of leisure. The emergence of mandatory education for children, and child labor was abolished. Childhood is an invention of the Industrial Revolution. Before the Industrial Revolution, children were just seen as little adults. This whole categorical distinction.
So a lot of really good things happened in the Industrial Revolution, but a lot of terrible things happened too. The Holocaust and the First and Second World Wars were clearly results of the Industrial Revolution. And millions of people paid horrific prices as the economy was transitioning from an agricultural to an industrial economy. Just read Engels' description of 19th-century Manchester and the squalor that people lived in. It was terrible.
So it's obvious to me that whatever we're going through right now is going to produce a lot of terrible things and a lot of good things, because change is kind of morally neutral. It just happens. It's inexorable, and we're not going to be able to stop that train on its tracks.
Chris Potts:So why is it so easy for me to think of the negatives and so hard to think of the positives? Is this a failure of imagination on my part? Am I a pessimist, or is this natural?
Amir Goldberg:Well, I think there's clear psychological evidence that suggests that we anchor more on the negative than we do on the positive. One manifestation of this is loss aversion.
Chris Potts:But I have more incentives to think of the positive. Because if I could think of one positive, I could be a rich person as a result. But I can only think of downsides and concerns.
Amir Goldberg:So that's interesting. Maybe you are a pessimist.
Chris Potts:I can think of positives. They just seem small. They don't seem on the scale of things related to the Industrial Revolution. They seem to be things that would help me with editing text and stuff like that, or help a certain segment of the artistic population do creative new things. Or help with marketing copy and other things that just seem small to me.
Amir Goldberg:Well, first of all, that seems to me to undersell a little bit what even today's existing technology does. I'll get back to that in a second. But I would say, first of all, we're all anchoring. We've seen disasters already, right? We've seen the Cambridge Analytica crisis. We are seeing the outcomes. We understand how fragile our social fabric and our democracy are, and we're all concerned. And if some of the outcomes are going to be that people are going to die – and I think that's also inevitable, in some indirect way – it's very difficult for us to think of any offsetting value that would offset the price of a human life.
Chris Potts:That makes sense.
Amir Goldberg:So I think it's rational for us to be concerned if we believe that this could lead to potentially millions of people suffering dramatically, if not dying. So I don't think that's irrational of you.
But going back to underselling – I had a conversation with my father a few weeks ago. My dad is a manager in a company in Tel Aviv, where I grew up. He lamented the fact that the large American conglomerates he works with have, for efficiency reasons, really gotten rid of secretaries. That was the prominent thing in the 1980s: if you were a manager, you had a secretary. And he said, "And I understand, we have tools. I can control my calendar. Back in the '80s, it was very complicated to control your calendar. I can interact with people over email. It was far more complicated then. You needed a secretary to get hold of the person on the other side. I completely understand it."
But people are so overwhelmed. You have very high-ranking managers in large corporations, and you can't even schedule a meeting with them because they're just overwhelmed with meetings. If there were a secretary, they could do it.
I was telling him, "Have you played with ChatGPT?" Because ChatGPT, or some tweak thereof, can easily schedule a meeting for you. I mean, it's apparent to me that if you feed somebody's calendar to ChatGPT and ask it, "Can you tell me when Chris is going to be available during the next two weeks for a company retreat from 8:00 AM until 5:00?", it can do that, and it can scale.
Now if you think about it as an economist – just think of how many people were employed in 1985 as secretaries, what portion of GDP that constituted. So to me, I think you're kind of underestimating, already, the economic substitution value that's going to come from ChatGPT. And of course, it comes with a price. There are going to be millions of secretaries who will no longer be viable in the economy.
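As a rough sketch of that calendar idea: ChatGPT had no public API when this conversation took place, so the snippet below stands in with OpenAI's chat-completions interface (the pre-1.0 openai Python package), and the calendar text and prompt are invented for illustration.

```python
# Sketch: asking a chat model to find free slots in a calendar.
# Assumes the openai package (<1.0) and an OPENAI_API_KEY in the environment.
import openai

calendar = """
Mon: 9:00-11:00 faculty meeting; 13:00-15:00 seminar
Tue: 10:00-12:00 office hours
Wed: free
"""

prompt = (
    "Here is Chris's calendar for the week:\n" + calendar +
    "\nBetween 8:00 AM and 5:00 PM, when is Chris free for a two-hour meeting?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```

The point is less the specific API than the shape of the task: the scheduling logic lives entirely in the prompt, not in hand-written code.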
Chris Potts:That's interesting. So I think what you're saying is that I'm wrong to think of it as one big thing that's going to happen. It's going to be a huge number of maybe smaller things that will have an aggregate effect on our entire lives. That makes perfect sense to me. So calendaring sounds small. But if it's one of 100 things that changes the fabric of everyday life, the overall effect of this technology is enormous. That seems to make sense to me.
The part about all the secretaries being out of work though, does that come true? That's always what people say when you have these things happen. But isn't there also just a process of re-employment that happens, because it's not like all of those people are literally out of work right now? There are people out of work, but it's not precisely that class of people. So that also seems simplistic.
Amir Goldberg:It will. But there were two parts to your question, and I'm not sure I fully agree with your characterization of what I just said. First of all, what I was trying to say is that even with the low-hanging fruit already available to us through ChatGPT, I felt like you were slightly underestimating its economic value.
But I think that the value is going to come not only from the aggregation of a lot of small tasks. Slowly, two things are going to happen. An ecology will emerge with new opportunities that we haven't even realized, that are very large in scale. It was impossible to imagine a social network in 1992, I think. And even when it was introduced – I'm old enough to remember that when Facebook was starting to become popular in 2006 or so, people were asking themselves, "Why would I use this stupid thing?" It was very difficult to imagine, because it was not clear what problem the technology was solving. People were not even educated to understand that implicit need, because it had not been productized at the time.
So I think what we will see is, initially, the aggregation of a lot of existing tasks being replaced. But not long after that, the convergence of these new innovations in an ecological fashion, in ways that will be game changers – ways we might have the creativity to imagine, but will be very unlikely to predict.
And I think it's also going to make even existing tasks somewhat different. I remember reading a tweet by a programmer who wrote, "Here's what I did over the Christmas break. All I did was play with ChatGPT, and here's what I discovered. ChatGPT is really useful for me to immediately hit the ground running when I want to start a new software project. If I ask it, 'Can you help me write code in Python that does X, Y, Z?' – a task I've never done before – it's really good at giving me the foundations." So think about that: it's not going to replace the programmer. But it's really, really going to make programmers significantly more efficient if it can reduce a lot of the cost that they need to invest in venturing into new tasks.
This behooves you to think: the story of substitution is not just about the substitution of whole people or whole professions. It is also the substitution of tasks and sub-tasks, and the redefinition of roles. And it's not just a language of automation and substitution. What I am really searching for at this moment in time is an analytical language that will provide a taxonomy of the kinds of things that these foundation models are capable of doing.
Chris Potts:The programming case is really interesting to me. There's just one subtlety I want to check in with you about. I have complete faith that this person can use ChatGPT to write programs that they already kind of know how to write. We all have the experience of not being able to remember how things work, such that if someone could just produce the program, we could verify it for ourselves, but it's tedious to write it. The verification step, though, is important. And I think that having a model do that could be great.
Having a model generate a program that I don't know how to write seems to me much more speculative and much more challenging from the point of view of engineering and software development, because now no one can validate it. I'm not sure that the validation process is actually less labor intensive than the writing from scratch.
Amir Goldberg:I agree with you. But first of all, I would predict that the vast majority of coding that's operating now in the software economy is all about very, very normal incremental reproduction of very similar tasks. Even every novel problem can be dissected into underlying known tasks. I think a lot of the returns to investment in the software development field will now be about, "Do I know how to efficiently tackle a new kind of programming challenge by very systematically figuring out what are the sub-tasks that I know, or that I can outsource to ChatGPT, or some development thereof? And where do I need to employ my own independent capabilities?" So I think that's going to change the profession itself. And we will need to have a language.
Let me say it this way. All programmers have to do these things already, and some are better than others. And I don't know, because I'm not a programmer – even though I actually did work as a software programmer for... But I was writing in C++, so this was like in the Stone Age. It was even before Python.
But I don't know that the software profession even has the language to make that type of distinction between tasks. And as a consequence, some programmers just have better mental models of their challenges than others. But as these things become more prolific, the professionalization of the usage of ChatGPT in software will force us to develop a language. And it will educate people in how to become more efficient in using these technologies.
Chris Potts:Right. But it's so funny, because even as you talk, I find myself vacillating between ChatGPT of the future as auto-complete – the analogy would be a sophisticated graphing calculator from my own youth – all the way to, "Oh, this is some kind of new programming agent that you'll pair-program with, that even outstrips you in some capacities, and helps you be creative and solve problems in more robust ways." I know we're at the auto-complete stage. I'm not sure how far we are from the sci-fi one.
Amir Goldberg:Okay. So it's interesting that I'm playing the role of the technological optimist in our conversation. Because, very much like you, I'm extremely concerned about the potential downsides. Extremely concerned. Especially in the age that we live in, where government is weak, and regulation is weak. And the speed with which development is happening is just mind blowing.
Here's what I think is the crux of the problem. The crux of the problem is that you and I, and generally people who are studying AI as an object of study, we don't have a good enough analytical language or model to describe what these models are doing.
You use the term "auto-complete", okay. One of the most compelling arguments that I've heard around the economic analysis of machine learning came from a bunch of researchers at the University of Toronto. Oh, I have the book actually in front of me. Here it is: Ajay Agrawal, Josh Gans, and Avi Goldfarb. A few years ago, they published a book called Prediction Machines, but it was preceded by a couple of papers. They were talking about AI. But that was very much in an age where "AI" was simply used as a placeholder for machine learning.
As economists, they said to themselves, "We need to understand what machine learning does. And once we have a language for what machine learning does, then we can have a model of what it's going to replace." And the way an economist thinks about it is: where is it going to reduce costs? It's going to reduce the cost of X. What is X?
And their answer was, X is prediction. Prediction. The most rudimentary supervised machine learning model does exactly that: it predicts. Is this a cancerous tumor or is this a benign tumor? It's just a classification machine that makes a prediction about some sort of labeling task (see the sketch below).
From that, they actually developed a very interesting economic model, saying, "Okay, let's now look at the world of work, broadly construed. What do people do? And can we think about those things as prediction problems?" And we would anticipate that the tasks we can most easily understand as prediction problems mark the markets, the industries, the jobs that are going to be disrupted.
Let's think of a TSA agent. What is a TSA agent doing? They're making a prediction about the likelihood that the bag of this particular passenger actually has explosives in it. You can really construe their job as a prediction task. So if you believe the Agrawal model, your anticipation would be that this is a job that is ripe for the picking by machine learning algorithms.
Maybe we're not there in terms of the performance of the algorithms, in light of the fact that the cost of error here is so high, and the TSA is a very conservative entity. But I think it's very safe to say that it's very likely that, over the next few years, a lot of the tasks that are being performed by human agents in the TSA, in screening passengers who are going onto planes, are going to be replaced by prediction machines.
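To ground that "prediction machine" framing, here is a minimal sketch of the rudimentary case mentioned above – a supervised classifier labeling tumors as malignant or benign – using scikit-learn's bundled breast-cancer dataset. It is purely illustrative, not a clinical tool.

```python
# Sketch: the most rudimentary prediction machine, a supervised classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The entire "product" is a predicted label (plus a probability) per case.
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("one prediction:", clf.predict(X_test[:1]))         # a label
print("its probability:", clf.predict_proba(X_test[:1]))  # and a confidence
```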
Chris Potts:Okay. But let me share a few thoughts about this... Oh, I didn't want to interrupt you. Go ahead, finish up.
Amir Goldberg:I just wanted to say, I think this is not enough to explain what models like ChatGPT are doing. It's not just about prediction.
Chris Potts:That's the point that I wanted to make, one of them.
Amir Goldberg:So we need a new taxonomy of tasks.
Chris Potts:Yes.
Amir Goldberg:And that will give us a framework to at least think about where the parts of the economy, of organizations, of work are that are going to be disrupted. And I think "auto-complete" is underselling it a little. But I don't know, maybe I'm overly optimistic. Maybe if I think about auto-complete, really the task of scheduling can be construed as an auto-complete problem.
Chris Potts:Wait, so the auto-complete comment could be regarded as trivializing or reductionist. But I just want to grant that if you're auto-completing an entire sub-routine or an entire paragraph, it's a different game from when you were just making a next-word prediction. So I didn't mean to trivialize that. But I was saying that, right now, the easy cases are the ones where the auto-completed thing, however long it is, is one that I can verify myself. When you start to venture into areas that truly are creative expression in a program – new kinds of programs – now you have a real problem of human verification.
But the bigger theme that I want to pull out from what you just said – I will be so bold as to say that we are leaving the era in which "AI" is synonymous with "supervised classifier".
Amir Goldberg:Agreed.
Chris Potts:It kind of was for a long time. And this perspective that you've offered is very much shaped by that moment in AI.
Amir Goldberg:Agreed. Agreed.
Chris Potts:And I think this is what you were alluding to with the fact that we have now entered into a realm of generation, free-form generation across many modalities.
Amir Goldberg:Exactly.
Chris Potts:And the other part I want to say is that I think we see a really easy path to having superhuman partnerships between people and these AI models. But the actual autonomous part – your TSA agent example is perfect. We are a long way from the autonomous part, I think. In part because we want to blame people for mistakes. In part because of the systemic risk of deploying one model and having it make correlated errors across lots of different scenarios. And in part just because the technology isn't there yet.
Amir Goldberg:Excellent. But that's why we need not think about it through the dichotomous prism of substitution – either it fully substitutes for a TSA agent or it doesn't. Imagine that the algorithmic TSA agent will only surface to the human agent things that are above a certain threshold of risk, and the human agent will then be the final decider. That in and of itself is a disruption. Because if that threshold screens out 90% of cases, you will save 90% of the personnel employed by the TSA.
My prediction here would be that the types of tasks the human TSA agents are doing will also change. They will actually have more time to think creatively about what our detection devices are missing altogether, or to anticipate the creativity of the terrorists.
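A minimal sketch of that threshold-triage arrangement, with random numbers standing in for the risk scores a real detection model would produce:

```python
# Sketch: algorithmic triage with a human in the loop. Only bags scoring
# above a risk threshold are surfaced to a human screener; the scores and
# the threshold here are made up for illustration.
import random

random.seed(0)
bag_risk_scores = [random.random() for _ in range(10_000)]  # stand-in model output

THRESHOLD = 0.90  # surface only the riskiest ~10% of bags

flagged = [score for score in bag_risk_scores if score >= THRESHOLD]
share_automated = 1 - len(flagged) / len(bag_risk_scores)

print(f"bags sent to a human screener: {len(flagged)}")
print(f"share of screening handled without a human: {share_automated:.0%}")
```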
Chris Potts:This is like a joke I once heard someone from the DoD tell. With every successive generation of unmanned plane, it requires more people on the ground. Generation one requires 4; generation two, 6; then 8, then 10. They become more autonomous, but they require larger teams. It's just that those teams are doing different things on the ground.
Amir Goldberg:Maybe. But if a new generation requires more people, while what those people were doing up until this point can now be outsourced to a machine, then you have redeployed human capacity better in the economy. You will be creating more value.
What I'm missing is a good framework to try and understand, what are these models actually doing? And can I think about those things abstractly? And you completely understood what I was trying to say, which is the language of prediction is no longer useful. It's not enough to describe what these models are doing.
Chris Potts:Just to give you a sense: my Natural Language Understanding course, which this podcast is nominally connected to – I started teaching it in 2012. In 2012, there was absolutely no natural language generation as part of it. No generation whatsoever. It was all classification. Now, every problem is or can be cast as a generation problem.
Dan Jurafsky's textbook with Jim Martin famously had no chapter on generation for a very long time. And now, of course, it has to be a focus of the latest edition, because generation has taken over. And I'm on record as saying that, five years ago, I would've bet against text-to-image generation being anything like where it is now. And of course, we have models like DALL-E 2, and Midjourney, and Stable Diffusion that just do things that look like science fiction to me.
Amir Goldberg:But can I ask you a question? How do you define generation? Can you give me a formal definition of what generation is?
Chris Potts:A formal definition is very hard. But it's this kind of feeling that the output space is not pre-structured. Of course it is, even for a language model – that's why the technical definition is very hard. But the space of things that a language model can produce is so large that it might as well be infinite. Whereas a classifier's output space has at most, say, 100 dimensions.
That's where you get into unanticipated behaviors. It can do things that completely surprise you in a way that even a 100-dimensional classifier really never could.
Then the other part that is really radically reshaping the field fast – I don't want to sound like one of these breathless enthusiasts, but it does feel like a real change – is that you can have a single frozen language model like ChatGPT learn new tasks in-context. So you don't train a custom model, you just prompt it. And it shows emergent behaviors, new behaviors. And that really just opens the door to doing even more than you could ever have dreamt of before.
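A minimal sketch of what "just prompt it" means in practice: the model's weights stay frozen, and a new task – toy sentiment labeling here – is specified entirely in-context with a few examples. As in the earlier snippet, the pre-1.0 openai chat interface is a stand-in, and the reviews are invented.

```python
# Sketch: in-context (few-shot) learning with a frozen model. No training
# step; the task is defined by the examples in the prompt itself.
import openai

few_shot_prompt = """Label each review as positive or negative.

Review: The plot dragged and the acting was wooden.
Label: negative

Review: A gorgeous, moving film from start to finish.
Label: positive

Review: I walked out halfway through.
Label:"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response["choices"][0]["message"]["content"])  # likely: "negative"
```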
Amir Goldberg:Yes. Think, for example, about the medical profession, and how much of a primary care physician's interaction with her patients is repetitive. It actually includes some generative interaction, but a lot of prediction tasks: I'm hearing certain kinds of symptoms, and I need to predict what the most likely underlying pathology is. Right?
I'm just wondering, it sounds to me like, in terms of the maturity of the technology, a lot of that interaction can now be replaced by machines. I'm not saying, again, that it should be completely substituted by machine. But a lot of the triage, a lot of the early kind of diagnosis. My MD friends are going to be upset by this suggestion.
Add to that the fact that we now have – and this is where all these complementarities come into play – miniaturized devices that take vitals. I mean, we're still not in the Theranos phase of that technology, but it's obvious to me that the hospital, which is itself a product of the Industrial Revolution, is going to be replaced by some sort of distributed entity, where people are actually going to be taken care of in their own homes. And they're going to be monitored by algorithms. And only under certain conditions will a human physician actually intervene. So much of medicine is going to look different. It's difficult for me to anticipate, but it doesn't sound like science fiction anymore.
Chris Potts:Good point. Yeah no, of course. And for devices and so forth, low-level decision-making, even things like how to do a suture and stuff, will probably be AI driven in a way that helps humans, replaces humans, and leads to better outcomes. It seems like, for medicine, there will always be a human element there. Maybe we could hope that AI will free them from record keeping and other things that distract them from the human part.
Amir Goldberg:Yeah. Maybe I am overly optimistic. But I have to tell you that when I started playing with ChatGPT and saw the quality of the generation, my jaw dropped.
Chris Potts:Yeah, of course.
Amir Goldberg:I teach executives in the Executive MBA, and I teach MBAs. And almost all my classes touch at some stage or another on AI. And I always do the Turing test, and I always make the joke that if you've spoken to Alexa recently, it is very easy for you to force it to fail the Turing test. My daughter just yesterday asked, "Alexa, what is Ms. Smith going to teach me in class today?" And Alexa was like, "I don't know how to answer that question." And then ChatGPT comes in. You can break ChatGPT, but it's not that easy.
Chris Potts:Well, you certainly can. I mean, it's very fluent. And it gives average responses to average-case inputs. And it gives very strange responses, often, to things that are outside of its training distribution. We might read those as creative or insightful, but they're also just evidence that it doesn't have any idea what's going on.
Amir Goldberg:Yeah. I can't remember who it was I heard refer to it as pastiche. Pastiche is a very interesting term. In the humanities, Fredric Jameson wrote an article that was then expanded into a whole book called Postmodernism, or, the Cultural Logic of Late Capitalism. His argument was that pastiche is the dominant cultural logic of late capitalism. He referred to it as imitation without a source. And it's interesting that now, some have argued that this is what ChatGPT and other models of that sort are doing. I find it interesting.
Chris Potts:I've fully brought ChatGPT into my classroom this winter. I'm teaching a large undergrad course, and I've been checking on how it does on my assignment questions via prompts that I write. I reckon it's getting a C– in my class right now. And I've been showing the students the ChatGPT responses in class and saying, "Here is why it's getting a C–. This paragraph here is all wrong. I'm not going to tell you what the problem is, but you might want to read this, and then not do this." But of course, if you read quickly over its responses, it looks like the work of an A– student. It's very fluent and writes a lot if you want it to.
Amir Goldberg:It just seems to me that the Turing test is too high a standard for us.
Chris Potts:Too high?
Amir Goldberg:Depending on what for, right? But in the continuum between a simple dichotomous prediction task and passing the Turing test, there are a lot of intermediate things that I don't know that we have a good analytical taxonomy to describe. And I think that if we were to develop that analytical taxonomy, that would give us some clarity in understanding and starting to anticipate, and also to measure how the economy is going to be impacted by this. And also give tools to productize it. I know that when you say "productize", immediately, it has a negative kind of connotation.
Chris Potts:No, not for me. No, not for me. Because it evokes for me the fact that AI technology and breakthroughs in science take us 95% of the way there. But that last 5% is what we've been talking about this entire time. And I find it to be every bit as difficult as the first 95%, for any interesting application.
Amir Goldberg:For sure.
Chris Potts:So I find it fascinating that we continually fall short of that, even when we feel like we've had a huge breakthrough.
Amir Goldberg:That's what my MBA students are trying to do, and what hopefully they're learning when they do an MBA at Stanford: the recognition that the quality of a product is not just the quality of the technology behind it. It is a clear understanding of what problem it solves, for the intended audience, and what the obstacles are in delivering that solution. And that's not a thing that you and I are good at doing, because we're on the science side of it. We're not on the productization side.
Chris Potts:I kind of actually want to return to your MBA students. But can we instead just switch gears a little bit? I want to find out some more about you.
Amir Goldberg:Okay.
Chris Potts:That'll bring us to your MBA students, I think.
Amir Goldberg:Okay.
Chris Potts:I was intrigued, first of all. We've known each other for a while, but I was trying to find out more about you in preparation for this. And I read on your website the following description. You majored in computer science and film studies at Tel Aviv University. That's already intriguing. You moved on to work as a programmer, at some unknown place, and later as an IT consultant, also unnamed. Then you did a PhD in sociology at Princeton. And then I lose track of the steps. But maybe that was then direct to Stanford. And here you've been in the Business School, even though you're a sociologist, computer scientist, and film nerd. I'm not sure. So what's going on here? Tell me your story.
Amir Goldberg:I was always good at math. And the university system in Israel is very different from the American college system. First of all, it's only three years, and you need to commit to a major ahead of time. And I couldn't figure out what I wanted to do.
I went into university in the late '90s, in Tel Aviv, at the height of the dot-com boom, in "Startup Nation", the second-largest Silicon Valley in the world. So everybody was doing computer science. I went into computer science, but I also wanted to do something that wasn't. And I thought to myself, "I love film, I love intellectualizing films." So I enrolled – I was allowed to do a double major. There was only one other guy that year who also double-majored in film.
And I have to tell you, I loved both. I loved the algorithmic beauty of computer science. I loved doing a search algorithm on a graph. I thought that there was something elegant and beautiful about it. And I loved watching old Soviet films.
Then I started working as a programmer, and I hated it, because you couldn't do all the beauty. It was all about fixing bugs and making sure things were presented nicely. And I worked for a while, but I kept kind of whetting my appetite for that other side. Film studies I really enjoyed. It introduced me to Marxist theories, and postmodern theories, and different ways of thinking about the world and analyzing texts. Thinking about the film as a text that you analyze, that you read. It was wonderful and exciting.
I worked for a few years as a programmer. It was boring. And the company I worked for was sold. Even though I was promised I would become a millionaire, I didn't.
Then it occurred to me that, in the early 2000s, you could make use of this moment of the internet revolution to study text, and to study film as text – to study social and cultural processes using computation, in a way that hadn't been available before.
So when I was doing my PhD in sociology, I taught myself Python, because Python was just emerging. There was nobody to help me – none of my advisors, with the exception of Matt Salganik. I don't know if you've heard of him, but he was a young, up-and-coming professor, basically my age, who joined a year after I started. He was one of the first people in this computational revolution. He also worked with Duncan Watts, etc.
Chris Potts:This is at Princeton?
Amir Goldberg:At Princeton, yeah.
Chris Potts:So you're at Princeton now. So how did a bored programmer who likes movies get into the Princeton sociology PhD program? Is it that easy?
Amir Goldberg:I don't know. I wrote an application. I wrote my statement. I got some recommendation letters. I know because my PhD advisor, Paul DiMaggio, who was a huge influence on my intellectual development and is also a wonderful person, told me, "I remember that I read your application, and I knew that we should admit you, because you asked interesting sociological questions, but you also brought this new computation thing."
Chris Potts:Okay.
Amir Goldberg:I didn't know that that was what I was doing. I was just intrigued. I didn't know who Paul DiMaggio was. Only after I came to Princeton did I realize that he's the third most cited living sociologist in the world, or something like that. At the time, I didn't know that.
It sounds like this self-help story where you say, "Just follow your passion." I followed my passion! It really turned out nicely for me!
Chris Potts:And then direct to Stanford?
Amir Goldberg:And then I went on the job market and I came directly to Stanford.
Chris Potts:And now you're a Californian?
Amir Goldberg:I am, yeah. That's true.
Chris Potts:You self-identify as a Californian? I'm still adjusting, even though I've been here a very long time now.
Amir Goldberg:I'm also an American. It's much easier for me to self-identify as a Californian than as an American. I'm nominally an American, but I'm definitely not emotionally there yet. I think that the fact that Donald Trump was the president who swore me in when I was doing my naturalization left a bitter taste in my mouth in that dimension. I don't know what being a Californian means.
Chris Potts:Well, what's your perspective as a sociologist on these three places that played a role in your life? Tel Aviv, Princeton, and now Silicon Valley, Palo Alto, Stanford.
Amir Goldberg:Princeton as a place was not that influential. As an institution, it definitely was. I never liked Princeton too much. It felt to me like a Disney version of Oxford. And it's in New Jersey, which is not the most appealing part of the country. I would go to New York all the time, because it's such a fascinating place.
I had the great luck of growing up in Israel as a Jew. Not as an Arab Palestinian, because Israel is a very segregated society. But I had the great fortune of growing up in Israel.
Also the time that I grew up. On both my parents' sides, my heritage is Jewish-Polish. My paternal grandparents survived the Holocaust in Poland. My parents grew up in Israel during its formative years, and my father fought in the Yom Kippur War and the Six-Day War.
So I didn't grow up through all that trauma. The trauma kind of existed generationally. But I grew up in the 1980s, when Israel was transitioning from a more socialist economy, as it had been set up by its founding politicians, into a free market economy. So I enjoyed both worlds. I enjoyed the communitarianism of the socialist era, but also the free choice of an emergent free market. Before it became a vicious, neoliberal kind of competitive market, and after it was an oppressive Bolshevik state. (It never was! I'm exaggerating!)
I had the freedom to explore my ideas, and to be in a society that valued education. And my parents encouraged me. I recognize that I was lucky. I'll say it again: I don't think this opportunity was given to my Palestinian Arab compatriots, Israeli citizens who were oppressed in some way or another, even though Israel is a democracy.
I think that was a very big part. And then Israel opened up, and globalization happened. And I thought to myself, "It's too small of a place. I need to explore America, and Europe, and taste the world." I never expected to end up in Stanford.
Chris Potts:At what point in all of this did NLP start to creep into your research methods?
Amir Goldberg:Only when I came here. Before that, I used a lot of network-analysis-based algorithms. Think about it. I remembered Dijkstra's algorithm for searching graphs from my CS undergrad. Then I thought to myself, "Wait a second, I can use it to analyze networks." So that's what I did, amongst other things, in my PhD. I only moved into NLP... Actually, you were my first collaborator in that respect, when NLP was maturing in a way that made it very useful for social science. But in Stanford, I don't know. Am I a citizen? Am I a Californian? Am I a citizen of the Bay Area? The whole Bay Area is a weird, weird place. I find it fascinating and weird at the same time.
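As a minimal sketch of the move Amir describes, repurposing Dijkstra's shortest-path algorithm for social network analysis might look like the following in Python. The toy network, the names, and the tie weights are invented for illustration, and networkx is just one common library choice, not a reconstruction of his actual research code.

```python
# Hypothetical sketch: Dijkstra's shortest-path algorithm, familiar
# from a CS curriculum, repurposed to analyze a small social network.
# The graph and weights below are invented; networkx runs Dijkstra's
# algorithm under the hood for weighted shortest-path queries.
import networkx as nx

# A toy social network; lower edge weight = closer social tie.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Ana", "Ben", 1.0),
    ("Ben", "Carla", 2.0),
    ("Carla", "Dev", 1.0),
    ("Ana", "Dev", 5.0),
])

# The socially "closest" chain of ties between two actors.
path = nx.dijkstra_path(G, "Ana", "Dev")
dist = nx.dijkstra_path_length(G, "Ana", "Dev")
print(path, dist)  # ['Ana', 'Ben', 'Carla', 'Dev'] 4.0

# The same shortest-path machinery underlies standard network
# measures sociologists use, e.g. closeness centrality.
print(nx.closeness_centrality(G, distance="weight"))
```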
Chris Potts:Sure, sure. It's very unusual. But what about all these MBA students that you teach now? You mentioned that you might not have anticipated, back at Tel Aviv or even at Princeton, that you would be teaching lots of MBA students. So what do they want to know, and what do you teach them, and what wisdom do you have? And all that stuff.
Amir Goldberg:It took me a long time to figure that part out. At the end of the day, what they want to know is how to make money. And I don't know how to teach them that. I don't mean it in a cynical way. They come here, they invest a lot of money in the degree. Some of them go into significant debt. They're making this investment in order to further their careers.
For a long time, I made the mistake of thinking that, in order to be an effective teacher in the classroom, I needed to be on top of things: to understand exactly how the economy is structured, to know what Elon Musk's new thing is. If somebody asked me, "Hey, what do you think about Jira?" I would be embarrassed to admit it if I didn't know.
Now I feel like I don't care. I don't know. You don't come to Stanford so that I can tell you about something you can look up on Google. My role in the classroom is to... I think what I've developed over the years is the ability to think in a very precise, analytical way about a problem.
So the value I add in the classroom is when a student says something – and believe me, MBA students love to name-drop, and to bullshit their way out of a question – I always ask them, "Wait a second. There's an assumption embedded in what you just said. What are you assuming?"
I think I'm very good at helping them see the analytical conditions that are necessary for the statement they just made to be true. If they're making an argument about why this company's going to be successful, they need to make sure that the assumptions are actually consistent with the world. So that's what I think I teach them: to think. To think analytically.
Chris Potts:I really like that. And also, even if you describe their goal as making a lot of money, for them and for all of us, they're going to do that via business, not by trying to win the lottery or something. So that implies a bunch of stuff about wise decision-making, and organizational structure, and culture, and all the things we associate with people who are successful in business. Which is really very people-oriented, in addition to requiring an understanding of a market, and a product, and everything. And all that stuff seems generally useful. And I don't have any cynicism about it.
Amir Goldberg:It's very useful. First of all, companies are increasingly learning that this is a really important part of what they do. And it's part of their competitive advantage. Companies are also increasingly pressured by their investors, by other outside stakeholders, to actually be ethical in these processes. And definitely our MBA students at least appear to be committed to that, and some of them are very passionate about these ideas.
I think there's a cynical way of looking at the MBA. You can say an MBA, especially a Stanford MBA, is basically a factory for reproducing elites. It's a very elitist institution. But there's another argument you can make: we need competent, bold, and ethical managers to introduce novelty, and value, and efficiency, and change in the market. And I think both of the arguments are true. Some days I feel like, should I be teaching MBA students, or, I don't know, undergrads in sub-Saharan Africa, where maybe the impact of what I'm doing would be significantly greater on their lives? But I also recognize that these are not mutually exclusive things. And I hope at the end of the day that whatever they take out of my classes is going to make them better managers, both in the ethical sense, but also in the value creation sense of the word.
Chris Potts:So one last question, that kind of ties it back to the themes of AI. I really believe what I said before, that research is not going to get us directly to products that are useful. And there's no reason to be cynical about the uses. We could focus on things like climate science, accessible tech – things that are just strictly good, but could be productized. It's incredibly hard to close the gap between the research and the actual productization. That's going to require people who have some business skills, people you might interact with. Do you have advice, at any level, for people who might want to team up to actually take an AI research development and turn it into something that has utility in a positive sense in the world?
Amir Goldberg:That's a good question. So let me answer it the way I answer it in a course I teach called People Analytics. People analytics, I think, is a term that was coined by Google. Basically, it's the idea of using modern data analytics, very broadly construed – so AI would also fall under that umbrella – to manage people inside organizations. Hiring, promotion, etc.
Often when I teach this, I say, "Here's one of the biggest challenges." A big challenge is that the HR department is traditionally the least data-oriented department in an organization. And if you bring algorithms in, first of all, a lot of the HR people will feel threatened. They think the machines are going to replace them. They won't understand the algorithms, and they won't know what to do with them.
And I think the biggest challenge, if you want to introduce people-analytics processes inside an organization, is that you need to have people in your organization who are conversant in both languages. Who understand HR, but who also understand AI. But when I say "understand AI", I don't mean they need to be developers. I refer to them as "informed consumers". For an informed consumer, I like to use the metaphor of driving a car. You can be a good driver, and have a relatively good model of what the internal combustion engine does, and a model of friction, without being able to build a car from scratch.
So I think that analogy carries over to the question you asked, which is how to build a product. To build a product that productizes AI technologies, you need people who are conversant in the technological language. You need people who are conversant in the audience's language, in the problem out there in the world that you're trying to solve. And importantly, you need people who are able to do the translation.
We talk about this a lot in the classes that I teach. If you are a brilliant business visionary, but you don't understand what is a big ask for your engineering team and what is a small ask, then you'll mis-specify which problems are low-hanging fruit, and you will alienate your engineering team. So I encourage my students to take classes in AI, so that they're conversant in the language. Not because they need to build the car, but because they want to drive it at multiple speeds, I suppose.
Chris Potts:Well, that's wonderful. Thank you so much for doing this, Amir. This was a great conversation.
Amir Goldberg:Yeah, that was really, really interesting. Thank you.