The story of AI, told by those who invented it


Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing came about, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable facial recognition system.


This episode was produced by Jennifer Strong, Anthony Green, and Emma Cillekens with help from Lindsay Muscato. It was edited by Michael Reilly and Mat Honan, and mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:


Jennifer: I’m Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we’ve been working on for a little while behind the scenes here.

It’s called I Was There When.

This is an oral history project showcasing the stories of breakthroughs in artificial intelligence and computing … told by people who have witnessed them.

Joseph Atick: And coming into the room, it spotted my face, pulled it out of the background, and said, “I see Joseph,” and that’s when the hair on the back of my neck… I felt like something had happened. We were witnesses.

Jennifer: We start things off with a man who helped create the first commercially viable facial recognition system… in the ’90s…


I am Joseph Atick. Today, I am the executive chairman of ID4Africa, a humanitarian organization that aims to give Africans a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After obtaining my doctorate in mathematics, my collaborators and I made fundamental breakthroughs, which led to the first commercially viable facial recognition. That is why people refer to me as a founding father of facial recognition and the biometric industry. The algorithm for how a human brain could recognize familiar faces became clear while we were doing mathematical research at the Institute for Advanced Study in Princeton. But that was far from having an idea of how you would implement such a thing.

It was a long period of months of programming and failure, programming and failure. And one night, early in the morning, we had just finalized a version of the algorithm. We submitted the source code for compilation in order to get a runtime code, and we stepped out; I went out to go to the bathroom. When I came back into the room, the source code had been compiled by the machine and returned. Usually, after compiling, it runs automatically, and as I walked into the room, it spotted a human moving around, and it spotted my face, pulled it out of the background, and said, “I see Joseph.” And that was the moment the hair on the back of my neck stood up. I felt like something had happened. We were witnesses. And I started calling the other people who were still in the lab, and each one of them walked into the room.

And it would say, “I see Norman. I see Paul. I see Joseph.” And we would take turns running around the room just to see how many of us it could spot. It was a moment of truth, where several years of work finally led to a breakthrough, even though theoretically no further breakthrough was required. Just the fact that we figured out how to implement it and finally saw that capability in action was very, very gratifying and satisfying. We then built a team that was more of a development team than a research team, focused on putting all of those capabilities onto a PC platform. And that was the birth, really the birth, of commercial facial recognition, I would say, in 1994.

My concern started very quickly. I saw a future where there was no place to hide, with the proliferation of cameras everywhere, the commoditization of computers, and the processing capabilities of computers getting better and better. And so in 1998, I lobbied the industry and said, we have to put in place principles for responsible use. And I felt good for a while, because I felt we had gotten it right. I felt we had a responsible-use code in place to follow regardless of the implementation. However, that code did not stand the test of time. And the reason is that we did not anticipate the emergence of social media. Basically, when we established the code in 1998, we said the most important element of a facial recognition system was the tagged database of known people. We said, if I am not in the database, the system will be blind.

And it was difficult to build the database. At most, we could build thousands, 10,000, 15,000, 20,000, because every image had to be scanned and captured by hand. In the world we live in today, we have allowed the beast out of the bag by feeding it billions of faces and helping it by tagging ourselves. We are now in a world where any hope of controlling the technology, and demanding that everyone be responsible in their use of facial recognition, is difficult. And at the same time, there is no shortage of faces on the Internet, because you can just scrape them, as some companies have recently done. And so I started to panic in 2011, and I wrote an op-ed saying it was time to hit the panic button, because the world is heading in a direction where facial recognition is going to be ubiquitous and faces are going to be available everywhere in databases.

At the time, people said I was being alarmist, but now they realize that is exactly what is happening today. So where do we go from here? I have been lobbying for legislation, for legal frameworks that make it illegal to use someone’s face without their consent. And so it’s no longer a technological problem. We cannot contain this powerful technology by technological means. There has to be some kind of legal framework. We cannot let the technology get too far ahead of us. Ahead of our values, ahead of what we think is acceptable.

The issue of consent continues to be one of the most difficult questions when it comes to this technology. Just giving someone advance notice is not enough. For me, consent must be informed. People need to understand the consequences of what it means. Not just to say, well, we put up a sign and that was enough; we told people, and if they didn’t want to, they could have gone elsewhere.

And I also find that it’s so easy to get wowed by flashy tech features that might give us a short-term edge in our lives. And then, at the end of the day, we recognize that we gave up something that was too precious. By that point, we have desensitized the population, and we get to a point where we cannot pull back. That is what worries me. I am concerned about facial recognition through the work of Facebook and Apple and others. I am not saying all of it is illegitimate. Much of it is legitimate.

We have come to a point where the general public may have become jaded and desensitized because they see it everywhere. And maybe in 20 years, when you leave your house, you will no longer be able to expect that you won’t be recognized by the dozens of people you cross along the way. I think at that point the public will be very alarmed, because the media will start to report cases where people have been harassed, people have been targeted, people have even been singled out on the street based on their net worth and kidnapped. I think that’s a lot of responsibility on our hands.

And so I think the issue of consent will continue to haunt the industry. And until that issue is addressed, it may be a problem that is not going to be solved. I think we need to set limits on what can be done with this technology.

My career has also taught me that being too far ahead is not a good thing, because facial recognition, as we know it today, was invented in 1994. But most people think it was invented by Facebook and the machine learning algorithms that are now proliferating around the world. At one point, I had to step down as a public-company CEO because I was curtailing the use of the technology my company was going to promote, out of fear of negative consequences for humanity. So I think scientists must have the courage to project themselves into the future and see the consequences of their work. I’m not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves and alert the world and policymakers that a breakthrough has pros and cons. And so, in using this technology, we need some sort of guidance and frameworks to make sure it is channeled toward positive applications, not negative ones.

Jennifer: I Was There When… is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.

Do you have a story to tell? Do you know anyone who does? Email us at [email protected]



Jennifer: This episode was recorded in New York in December 2020 and produced by me with help from Anthony Green and Emma Cillekens. We’re edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang… with sound design and music by Jacob Gorski.

Thanks for listening, I’m Jennifer Strong.
