Any powerful new technology will have unexpected and complex influences on society.
Many of those influences will be positive, but there will inevitably be negative ones also. As an AI business, it is our responsibility to look out for ethical problems on the technological horizon. These can be subtle and hard to spot, but when we see them, we should be ready to take action to avert them and ensure that our company, and the industry of which we are a part, is a force for good.
Fortunately for us, and less so for Mark Zuckerberg, events earlier this year brought our attention to some of the subtler threats that irresponsible use of big-data analytics can pose to society. This has allowed us to be proactive and take a look at how Pixoneye can lead the way in how data companies take their responsibility to society more seriously.
The Current Data Controversy
Earlier this year, many people were deeply unsettled by the revelations surrounding Cambridge Analytica and its political activities using Facebook. The firm used the profiles of 87 million Facebook users to target political messages and influence votes, including in the US presidential election and the UK Brexit referendum.
We at Pixoneye were also very troubled by the story and were even mentioned in an article about the scandal which was published in the Guardian. As a result of these events, we started to talk among ourselves about how we can be as different from Cambridge Analytica as possible.
The way we see it, there are two major legitimate concerns regarding data analytics that have been hot topics in recent months:
The Echo Chamber Problem – The use of big-data analytics to influence society in a divisive manner.
The Privacy Concern – Companies sharing or collecting personal data without the explicit consent of the subject.
The Echo Chamber Problem
This is the more insidious of the two problems, and one that hadn't really come to light until the recent news stories about Russian tactics of political influence and the Cambridge Analytica scandal.
Having said that, it has been quite clear for several years now that Facebook and Google’s ability to segment users by political affiliation has contributed to widening ideological divisions in the US and Europe. This increasing division has been partly caused by “filter bubbles”, an unfortunate side-effect of newsfeed personalisation and targeted political campaigns.
Filter bubbles occur when targeting algorithms ensure that people are only exposed to information they already agree with and therefore become isolated from opposing viewpoints.
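To make the mechanism concrete, here is a toy simulation of our own devising (it does not resemble any real newsfeed algorithm, and every number in it is illustrative): a ranking rule that maximises predicted agreement ends up showing the user only a thin slice of the available content.

```python
import random

random.seed(42)

# A catalogue of 200 items, each with a political "stance" in [-1, 1].
items = [random.uniform(-1, 1) for _ in range(200)]

# A user with a mild leaning of -0.2.
user_stance = -0.2

# Agreement-maximising ranking: show the 20 items whose stance is
# closest to the user's own, with no diversity term at all.
feed = sorted(items, key=lambda s: abs(s - user_stance))[:20]

# The catalogue spans nearly the whole spectrum...
print(f"catalogue range: {min(items):.2f} to {max(items):.2f}")
# ...but the feed is a narrow band around the user's existing view.
print(f"feed range: {min(feed):.2f} to {max(feed):.2f}")
```

The point of the sketch is only that ranking purely by predicted agreement guarantees, by construction, that the user never sees the far side of the catalogue; opposing viewpoints are filtered out before they can be encountered.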
We think the Cambridge Analytica whistle-blower, Christopher Wylie, expressed the problem with targeted political messages very well:
“Instead of standing in the public square and saying what you think and then letting people come and listen to you […], you are whispering into the ear of each and every voter and you may be whispering one thing to this voter, and another thing to another voter.
We risk fragmenting society in a way where we don’t have any more shared experiences and we don’t have any more shared understanding. If we don’t have any more shared understanding, how can we be a functioning society?”
We agree that this side-effect of targeting algorithms is a real problem, and so we want to prevent our technology from being used by anyone across the political spectrum to “whisper into the ears” of our users. We don’t want to be used as a tool to divide, and we want to avoid becoming an artificial barrier between groups that should be having open dialogue.
We will make this commitment in two ways:
Firstly, we will add a provision to our terms of service stating that Pixoneye cannot be used to target political messages at end-users.
Secondly, we want to publicly commit to never developing any segmentation characteristics relating to political affiliation, religion, race, or sexuality. We believe that these characteristics could be used to discriminate against minority groups or to sow division between groups.
We hope that these measures will inspire other data analytics companies to make a similar commitment to fighting the dark side of data.
The Privacy Concern
This is something that Pixoneye has always been very strong on. Privacy is the foundation of our business. You can read a detailed post about the many reasons Pixoneye is so private and secure here, but if you’re short on time here is a short summary of a couple of the main reasons.
1. Pixoneye provides 1st-party data – When you download an app which uses Pixoneye, the profile the company gathers on you can be used to personalise your experience of its services, but it cannot be shared with other companies or sold into a marketplace without your prior permission. It's a 1st-party, explicitly consensual data relationship. We (Pixoneye) cannot sell or share your data either, and we keep it ring-fenced on our secure servers in Belgium.
This is one of the reasons why we see Pixoneye as the solution to the problems with the current data ecosystem where your data is sold into a huge marketplace where anyone can buy it, or simply given away by Facebook. We provide the personalisation while allowing you to decide which companies you trust to see your profile and handle your data responsibly.
2. On-device analysis means we never actually have your data – Pixoneye's analysis happens entirely on your device: we never upload your photos to our servers. This means they can never be leaked, or even looked at by a human being.
The only actual data Pixoneye ever gathers is the simple profile of probabilities that our AI passes from your device to us – a dataset that is inherently anonymous and secure. This 'black box' approach provides leak-proof, hack-proof personal data.
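As a sketch of what that architecture implies, here is a minimal, hypothetical example (the segment labels, function names, and scoring model are all invented for illustration and are not Pixoneye's actual code or API): the raw photo data stays on the device, and only an averaged probability profile is ever produced for transmission.

```python
from typing import Dict, List

# Hypothetical segment labels -- illustrative only.
SEGMENTS = ["pet_owner", "frequent_traveller", "new_parent"]

def classify_photo(photo_features: List[float]) -> Dict[str, float]:
    """Stand-in for an on-device model: turns one photo's features
    into a score between 0 and 1 for each segment."""
    return {seg: min(1.0, max(0.0, f))
            for seg, f in zip(SEGMENTS, photo_features)}

def build_profile(photos: List[List[float]]) -> Dict[str, float]:
    """Average the per-photo scores into a single anonymous
    probability profile -- the only thing that would ever leave
    the device. The photos themselves never do."""
    totals = {seg: 0.0 for seg in SEGMENTS}
    for photo in photos:
        for seg, score in classify_photo(photo).items():
            totals[seg] += score
    n = max(len(photos), 1)
    return {seg: total / n for seg, total in totals.items()}

# Three photos' feature vectors, processed locally on the device.
photos = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [1.0, 0.0, 0.2]]
profile = build_profile(photos)
print(profile)
```

The design point mirrors the claim above: because the analysis happens locally, a breach of the server could only ever expose these aggregate probabilities, never the photos behind them.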
If you’re interested in learning more, please read our in-depth white paper about how Pixoneye is extremely private and secure here.
Speak to us today about how we can help you understand your users better whilst still respecting their privacy.