I started working at Facebook (now Meta) full time in 2017 after doing internships with the Core Data Science (CDS) team (now the Central Applied Science [CAS] team) in the summers of 2015 and 2016. CDS was great; I learned as much there as during my actual PhD at Cornell. I learned to be a good engineer, to interface with world-class systems, and to develop and promote my own projects. I want to say I’m grateful for all of the mentorship there.
But, I did quit my job at Facebook after about 2 years in part because I had some real uncertainty over whether what I was doing was ethically the right thing. Let me set up where I’m coming from before I tell the story. It’s not a long setup or story, so bear with me.
my perspective
I’m not an “algorithms are bad” guy. I don’t really believe that Facebook or other social media companies were wholly or even largely responsible for election issues in 2016. I would suggest people look to Fox News and CNN for blame there. I think it’s true that Facebook was too slow to prioritize anti-election-interference measures—and I know this because I was among the first to work on this at the company in the fall of 2017 around the Alabama Special Election that Doug Jones won. The team was called Election Integrity, there was a war room, you can look it up.
These days, I think Meta is being a bunch of weenies by trying to keep politics off Threads and generally de-emphasizing politics on their platforms. Zuck likes challenges in his cage matches but apparently shies away from the real thorny challenges of creating a habitable online public sphere that includes the full richness of human society. Oh well.
I’m also not an “experiments are bad” guy. The New York Times whines about Facebook and other tech companies experimenting while running literally identical experiments itself to optimize its own platform. Rather than experiments being an issue, there is a duty to experiment. I personally would like to see experiments become public after a period of time and to have better ethical guidelines, but the idea of experimentation itself is good rather than bad.
my story
Anyway, the story. There was a panic inside Facebook in 2018 around a series of academic articles that ran “deprivation studies.” These studies basically randomly assign people to not use Facebook and then measure their “subjective well being” (SWB), which is basically a bunch of questions you ask people about how they are doing. The internal freak-out was around studies that said Facebook had large negative effects on SWB, particularly for teenagers. Internally, Facebook wants to believe it is “good for the world” (this is the actual question they asked people and wanted to get people to respond “yes” to, I’m not sure if they still ask it).
If you think you are doing good for the world and then there are experimental studies saying you are harming well being, well that’s an identity issue for you. Are you a good guy or a bad guy? It becomes a crisis.
In my opinion, the right way to respond to the deprivation studies would have been to replicate them. Outside researchers often make mistakes about how platforms work. I suggested replicating but that didn’t go anywhere.
So the internal response to the deprivation studies, I think headed by Chris Cox, was to hire a team to fix this. In Facebook-land, the way you fix this is you make a metric, or a number you track widely, and then make product changes to try and make it go up. So the basic idea was to create a subjective well-being metric for everyone on Facebook and then try to optimize it with product iteration. The thing was called Project Glow.
I thought this was both a bad idea scientifically and of questionable ethical merit. On the science side, I was really skeptical you could actually measure and increase the metric and believe your results. There are plenty of technical causal inference things that go into this, and I won’t elaborate here. But basically, I don’t think you could get a reliable measure of SWB that would both generalize to the whole Facebook platform AND be something that you could reliably increase. You’d probably end up increasing a number you believed was SWB but was in fact capturing other things.
The deeper problem is, well, this is subjective well being. It is assessed by asking people personal questions about their life and their mood and so forth. It is related to medical conditions like anxiety and depression. If someone told you that in order to use Facebook you would have to consent to them not just measuring your on-site activity but also making informed predictions about whether you were satisfied with your life or depressed, like, I don’t know, you’d probably have some concerns. I certainly would. And I really would have concerns if this was done without public knowledge.
So I brought this up. I was assigned as the researcher on the project, and I was told this was an honor because it was so high pri. I immediately told my manager and anyone who would listen that this was a dumb idea of questionable ethical value. I suggested to the PM (the person hired to lead the project) that we publicly disclose what we were doing because of the thorny issues and was told “that would be risky.” Well, yeah, because the idea was pretty questionable. If we stated this project a little differently it would sound completely insane: “let’s predict worldwide who has depression, and keep that number a secret, and try to make changes on Facebook to make that number go down.” Like, what? You can sort of see the good intentions mixed with weird corporate secrecy mixed with hubris.
When I kept pressing on the ethical issues, they literally brought in a company lawyer, got everyone on the project together, and sat me at a table where the subject of discussion (led by the lawyer) was how “we” could reframe this project to think it’s ethical. By “we” they meant “how can we get you, George, to think this is ethical.” Like, lol. In retrospect I should have quit after that meeting, but I didn’t. I kept complaining and I slow-rolled the project. When I don’t want to work on something, I just sort of don’t. I’ve never been able to make myself.
After a few months of this and several tense conversations with my manager, I got fed up and quit. Everyone seemed surprised I wanted to quit, but that just confirmed to me that people weren’t listening to me. I don’t know what happened to the project—I was told it got killed later but I don’t have any specific knowledge of that.
That’s my story.
facebook is a weird place
The desire inside Facebook to believe you are doing good is really harmful, IMO. It’s a job. It does some good things and some bad things. It allows people to send their nudes quickly to romantic partners. It lets people scam each other on Marketplace. It lets you keep up with your grandmother. There are much more impactful things you can do with your time than work at Facebook, and more innovative things as well. But if you want a stable, good-paying job that is within a few zip codes of the tech frontier, then sure, it’s not a bad place to work. I even re-applied for a job there later on, and was really considering it until I realized it wasn’t going to pay that much more than my current job, which is actually good for the world.