iPhones were known to have security issues well before Siri's release, and Siri, a voice-activated listening application, was flawed from the start. To learn how to refine its adaptive technology, Apple shipped devices that collected initial usage data from customers for research and development, then updated and reintroduced those features in later product versions.
Users know what to expect only by listening to online product specialists, reading articles and reviews, or talking with salespeople. Siri is not presented as having the capacity of a help desk agent, so how can the application succeed in improving digital adaptability? After all, when Siri is initialized, it displays no disclosure statement or warning message.
A virtual assistant is helpful, but only if users actually read the specifications before, or in addition to, using voice-activated services. That still does not address cases where the device captured illicit activity: the fault there clearly lies with users, yet Apple's apologies have not been sincere. It is not Apple's responsibility to ensure that its users comply with every conceivable area of the law, but the majority of users who do act responsibly have simply been ignored in the name of safety. Safety aside, the idea of spying on users is alarming in any situation, and the effects will take years to manifest. Users may never realize that their identity has been compromised, that their contact list has been shared with spammers, and so on.
Security Threats Continued After Apologies To The Public
Apple has stopped using “response grading” to collect data from iPhone users, so it is safe to assume that, after the criticism and negative response, most users and the company itself came to view the practice as unethical. Apple abandoned this form of spying to improve its public appeal, and although the company apologized to consumers, it made no major changes to its requirements or technical specifications.
When users agree to use Siri, or any of the network provisions for that matter, they accept that their information has only a certain level of security and no guarantee of complete privacy. They were made aware of this when they signed contracts for the service and devices, but it was unethical that the warnings were not as clear as they should have been. Reading the “fine print” is not always practical for customers seeking fast solutions to their communication needs, for those who are disabled or visually impaired, or for users unable to interpret legal jargon or technical specifications.
Apple did not act responsibly or ethically in letting customers approach such an experimental application without greater knowledge of how their information would be transmitted and then analyzed in development processes that do not benefit users in any direct way. Mining usage patterns to add new features contributes little to an authentic experience on an already customizable application unless the gains in accessibility are profound.
It is disappointing that, after the Siri breach, Apple has continued to let security issues surface on every iPhone produced to date. The latest breach exposed all iPhones to unnecessary data sharing, and it came as no surprise to users who have grown used to a lack of privacy in the first place. After all, do all users even understand what internet privacy really means? Letting others catch a more realistic glimpse of their daily lives may be a personal choice, but Apple is still at fault for not providing easy-to-understand disclosures, or at least a cheat sheet of basic usage instructions.
Customers Should Not Be Treated as Expert Users, Experimental Subjects, or the Technically Illiterate, Nor Discriminated Against
The customer advice available online is useful only to users looking for specific tools, service add-ons, problem fixes, or help with debugging. Placing all liability on the consumer assumes that products and services will work as expected, most if not all of the time, with no immediate power-supply or initialization issues for what is sold as a plug-and-play device with automatically downloadable applications. Not all users know about supplementary firewall, antivirus, or anti-spyware apps, nor do they always have real-time access to such updates.
Apple has failed consumers in the past and has shown that it will act irresponsibly, targeting less knowledgeable users in the process. It is arrogant to produce devices that behave as spying devices and to dismiss potential liabilities as mere signs of general consumer ignorance about digital and technical aptitude or networking skills.
A separate but relevant issue in Apple's relationship with Google goes beyond the usual data and privacy concerns. Uighurs were specifically targeted by a recent iOS hack reported by Google, and it is disappointing that such companies would inadvertently single out a minority ethnic group by failing to monitor the activity that matters most: error reports and security breaches. This leaves other users to believe they are somehow safer, or that spying is an acceptable practice as long as it affects only specific groups while particular technical issues elsewhere are more or less ignored.
It is possible that users who are more comfortable with surveillance technologies communicate more readily under such conditions, which undercuts any resulting safety improvements or relief from connectivity constraints. Most users understand that freedom online has limits, including provisional controls over internet use, but Apple needs to make a more explicit effort to label its devices with warnings, rather than relying on pamphlets or lengthy and often misunderstood contract agreements.
What do you think about the Apple security breach? Let us know down in the comments.
This article originally published on GREY Journal.