Elizabeth Warren has long been the antitrust candidate of the Democratic Party, and she has made clear her intention to “break up” tech giants such as Amazon, Google, and Facebook. (How she plans to do this remains to be seen.) Facebook appeared to respond to this criticism by removing Warren’s ads yesterday during the SXSW conference in Austin. The Silicon Valley-based company apologized, citing improper use of its corporate logo, and restored the ads later that day, with a spokesperson telling Politico the ads were reinstated in the interest of “robust debate.”
The irony here is hard to miss. Facebook uses consumer data without permission to target specific users, showing them ads based on what that data reveals about their preferences. Warren, meanwhile, wants to gain influence and reach voters, so she buys ads on the very platform she hopes to break up. It’s confusing, and it leaves one wondering who is in the right. Does Warren plan to use the platform to win, and only then regulate it? Is Facebook wrong for perhaps trying to censor speech it finds unfavorable to its business practices? Is Facebook a villain for presumably delivering votes to Trump, but a hero for potentially giving a bump to a progressive candidate like Warren?
In spite of the confusion, one thing remains clear: by removing Warren’s ads, even temporarily, Facebook acknowledged its significant role in influencing voter behavior during election cycles. Despite its repeated insistence that it is a neutral party guided merely by where the invisible hand of the market leads it, the opposite seems to be true.
Whatever the reason for Facebook’s removal of Warren’s ads, the tension between government and the tech sector seems to be rising, with no end in sight. Facebook still allows ads containing hateful or untrue content to target vulnerable communities that may be especially susceptible to incendiary claims with no grounding in fact. It also permits fringe groups like the Flat Earth Society and anti-vaccination believers to build communities around unscientific beliefs that could cause public health crises and erode trust in America’s institutions.
Will future regulation stop at Facebook’s business practices, or will it extend to content as well? These questions may seem abstract, with no immediate answers, but marketers need to start considering them now, even if their reach is only a fraction of Facebook’s. No longer can marketers rely on automation to reach potential consumers without considering the consequences of their campaigns. Is the copy making outlandish promises that aren’t grounded in reality? Are ads targeting prospects with information that goes against their best interests just to move them along the sales funnel? How this Facebook privacy saga ends remains to be seen, but politicians and voters alike have caught on to the tech giant’s business model.