This week, Google announced a new measure that its leadership hopes will represent a positive step in how the tech industry handles valid privacy concerns. The tool, called TensorFlow Federated, allows AI to practice a technique called federated learning – training machine learning models on decentralized data across multiple devices, without pulling that data into one central place – and deriving conclusions from what it learns. The human analogy would be something like a detective or an analyst: poring over multiple, disparate bits of information, making sense of them, and coming to a reasonable conclusion.
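To make the idea concrete, here is a minimal, purely illustrative sketch of one common flavor of federated learning, federated averaging, using a toy linear model and a handful of simulated “devices.” The data, model, and hyperparameters are all assumptions for illustration, not Google’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, X, y, lr=0.1, steps=5):
    """Train on one device's private data; X and y never leave this device."""
    w = global_weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Simulate three devices, each holding its own private dataset.
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

global_weights = np.zeros(3)
for round_num in range(10):
    # Each device refines the shared model on its own data ...
    local_weights = [local_update(global_weights, X, y) for X, y in devices]
    # ... and the server averages the refined models, never seeing the raw data.
    global_weights = np.mean(local_weights, axis=0)
```

The point of the pattern is what the server receives: model weights, not the underlying emails, photos, or keystrokes.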
TensorFlow’s product management team posted a Medium article explaining the process of AI and machine learning used to derive meaning from data on individual devices, rather than analyzing a chunk of data hoovered up by a tech giant and plopped onto a central server. The underlying TensorFlow framework is already used by some of the largest, most successful companies in the world, such as Airbnb, Coca-Cola, Twitter, and Intel. At first glance, TensorFlow Federated’s abilities seem promising, but a deeper read of the Medium piece gives one the feeling that the team is more pleased with this AI’s capabilities in general than with placing data privacy at the top of its list of concerns.
The announcement comes at a time when other large tech companies are announcing plans to protect the privacy of their users: Facebook has declared its intent to manage privacy more effectively by announcing a “privacy-focused” reorganization of the social media platform. It’s a bit hard to know how Facebook plans to do this, as galaxies of private data points are already sitting on servers or are otherwise available to Silicon Valley without users’ permission. It may be a challenge to put that genie back in the bottle, but something’s got to be done, and TensorFlow Federated’s new capability seems to point to what more tech companies could be doing down the road.
According to CNET, TensorFlow Federated’s AI makes an educated guess by essentially sidling up to the data on a user’s phone without plucking it from the device itself. “The algorithm applies what it already knows to the data on your phone, such as suggesting replies to emails, and creates a summary of what it learned in the process to send back.” This approach attempts to satisfy the need to learn from data without explicitly collecting it: rather than harvesting the data itself, it harvests what the data is telling it.
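In code terms, the “summary” CNET describes maps onto something like a model delta: the device refines the shared model against its own data and transmits only the difference between the refined weights and the originals. Continuing the toy sketch above – and noting that a delta-based exchange is one common design choice, not a detail Google has confirmed here:

```python
import numpy as np

def device_round(global_weights, X, y, lr=0.1, steps=5):
    """Runs entirely on the phone; X and y (the private data) never leave it.

    Returns only the 'summary of what it learned': the change this device
    proposes to the shared model.
    """
    w = global_weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w - global_weights  # the only thing sent back over the network

# Server side: nudge the shared model by the average of the received deltas,
# e.g. global_weights += np.mean(collected_deltas, axis=0)
```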
This effort is all well and good. But what neither Google nor the TensorFlow team explains in detail is how this new process is designed to keep data private and secure. The lessons derived from the data may well be anonymized, but data that is collected and stored on a server can be anonymized too. It seems as though TensorFlow Federated is describing how AI could be trained, over time, to derive more and more information from computers and mobile devices without necessarily having to access private data. But this effort, however much appreciated, still falls far short of the governmental regulation needed to rein in Big Tech’s overstepping. The answer to data privacy is not just new AI capabilities, but transparency about data collection as stipulated by governmental bodies.
The Silicon Valley community needs to take further steps to demonstrate its willingness to partner with the public sector and respond to public concerns, rather than rejoicing over its ability to launch nimble new products.