Understanding the Intersection between Science, Technology, and Society
Just two hundred years ago the world looked very different. Most people’s lives were driven by the tangible. Science was considered a mostly academic pursuit, raising questions that occupied the minds of only a few.
Technology was mainly in the hands of governments, and it benefited people’s lives in a way much different from today, and to a far lesser extent.
Just thirty years ago, anyone seen holding a device with the features of today’s latest iPhone could easily have been mistaken for a magician or… an alien.
Today we are so used to living with technology. Most of us are surrounded by smart devices and travel in futuristic vehicles. We read every day about the latest scientific breakthrough and share our opinions over social media…
Elon Musk sent a Tesla into space…
We rarely stop to think how we became this way. Do we affect science and technology or do they affect us? How do we keep the balance? Is there a place for morals when science and technology show us the ‘right’ way?
WHAT IS STS?
The intersection between science, technology, and society (STS) is an academic discipline that studies how society and culture create science and how science affects society in return.
As an academic concept of a new generation, it is considered to be an interdisciplinary subject and has been given multiple interpretations by various schools of thought.
Several major universities have STS programs.
At Harvard University, the program is considered to unite two major streams of scholarship: science and technology (S&T) on the one hand, and society on the other.
‘Studies in this genre approach S&T as social institutions possessing distinctive structures, commitments, practices, and discourses that vary across cultures and change over time.
This line of work addresses questions like the following: is there a scientific method; what makes scientific facts credible; how do new disciplines emerge; and how does science relate to religion?’
The program then considers the questions of control over science and technology – is it needed, and where should the boundaries sit? It tries to identify the risks S&T may present to ‘peace, security, community, democracy, environmental sustainability, and human values’.
It considers questions such as:
- ‘How should the states set priorities for research funding?’
- ‘Who should participate in technological decision-making and how?’
- ‘Can and should life forms be patented?’
- ‘How should societies measure risks and set safety standards?’
- ‘Should experts communicate the reasons for their judgments to the public and how?’
At Cornell, similarly to Harvard, the discipline is considered a unity between the fields of S&T and their social dimensions. The program focuses on studying how knowledge and technology arise within the context of society, both today and throughout history.
They study the progress of knowledge from its conception, through its transfer, to its transformation by societal relations – the way people interact with scientific knowledge, when they use it, and when they come into conflict with it.
When science clashes with the societal norms, where does it fit?
According to MIT, the academic discipline of STS should try to bring more understanding to the human-built world.
A world where science and technology are no longer confined to the lab. They have penetrated our everyday lives and cannot be contained in a separate field.
They affect, and are intertwined, with nature, culture, and history.
At Berkeley, STS is considered a multidisciplinary field dedicated to studying the creation of knowledge and the progress and results that scientific and technological knowledge produce in other fields. The program puts greater focus on the way knowledge is created today – on ‘cutting-edge theoretical and conceptual inquiry, and engagement with public policy.’ Berkeley proudly describes STS as a science of a new generation.
At Stanford, the program is considered an interdisciplinary science, and it is the only program there that offers both a Bachelor of Arts and a Bachelor of Science degree. It has a large scope, including concepts from:
- Computer Science
- Electrical Engineering
- Management Science and Engineering
- Political Science and Sociology
At Princeton, the program focuses on the cause-and-effect relationship between society and technology. Technology is created by humanity, and comes back to change the way humanity develops. It is a cycle between the possible and the needed.
It is a program for engineers and scientists, but also for humanists and social scientists who want to explore the ‘shaping, development and deployment of technological solutions for the benefit of society.’
FOUR CASES OF INTERSECTION OF SCIENCE, TECHNOLOGY AND SOCIETY
In this section we will give you four historical and scientific examples of the way science, technology, and society have influenced each other to create complex issues that fall within the subject of STS – cases where science, technology, and society came into conflict.
The case of Ford Pinto
The Pinto is a Ford model, manufactured and sold by the Ford Motor Company in the United States in the 1970s. It was marketed as the smallest Ford vehicle in the States since 1907, and it was the first subcompact vehicle manufactured by the company in the country.
The decisions involved in the model’s design sparked a controversy that remains notorious to this day. The issue concerned mainly the design of the fuel system and, in particular, the placement of the fuel tank.
To begin with, the Pinto was developed at a time of confusion caused by changing safety standards. Ford built only to the 20 mph moving-barrier standard up until 1973, instead of the more stringent proposed 30 mph moving-barrier standard, and the company objected to the new regulations.
Next, the fuel tank of the Pinto was fitted between the rear axle and the rear bumper, in keeping with the standard design for subcompact cars at the time.
That position of the fuel tank would prove detrimental in high-speed rear-end collisions: the tank could rupture and leak fuel, in some cases tragically leading to fires that killed the passengers. To make things worse, the rear lacked structural reinforcement; the rear bumper was called ‘essentially ornamental’.
Early crash tests of Ford models had shown this vulnerability even in low-speed crashes. Engineers put forward several proposals to change the design and make the vehicle safer; however, no ‘proven’ solutions were adopted, and the crash results were tagged ‘inconclusive’.
In 1973, Ford’s Environmental and Safety Engineering division produced a cost-benefit analysis titled ‘Fatalities Associated with Crash Induced Fuel Leakage and Fires’, which later became known as the ‘Pinto Memo’. (The report later became public because of litigation against Ford.) The report was required by the NHTSA (the National Highway Traffic Safety Administration) in order to consider Ford’s objection to the stricter 30 mph moving-barrier safety standard, and it was prepared according to the NHTSA’s safety evaluation standards.
This is what the famous Mother Jones article ‘Pinto Madness’ had to say about the Pinto Memo:
‘Ever wonder what your life is worth in dollars? Perhaps $10 million? Ford has a better idea: $200,000… In order to be able to argue that various safety costs were greater than their benefits, Ford needed to have a dollar value figure for the “benefit.” Rather than be so uncouth as to come up with such a price tag itself, the auto industry pressured the National Highway Traffic Safety Administration to do so. And in a 1972 report the agency decided a human life was worth $200,725…[later] rounded off to a cleaner $200,000, in an internal Ford memorandum…
This cost-benefit analysis argued that Ford should not make an $11-per-car improvement that would prevent 180 fiery deaths a year… The memo argues that there is no financial benefit in complying with proposed safety standards that would admittedly result in fewer auto fires, fewer burn deaths and fewer burn injuries…’
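The arithmetic behind the memo is simple enough to sketch. The figures below come from this article ($11 per car, 180 deaths per year, $200,000 per life); the fleet size of roughly 12.5 million vehicles is an assumption added for illustration and is not stated above.

```python
# Sketch of the cost-benefit arithmetic behind the Pinto Memo,
# using the figures quoted in the article. The fleet size is an
# assumption for illustration, not a figure from the text above.

COST_PER_CAR = 11          # dollars for the proposed fuel-system fix
FLEET_SIZE = 12_500_000    # assumed number of affected vehicles
VALUE_PER_LIFE = 200_000   # NHTSA's 1972 figure, rounded as in the memo
DEATHS_PREVENTED = 180     # fiery deaths per year, per the memo

total_cost = COST_PER_CAR * FLEET_SIZE            # 137,500,000
total_benefit = VALUE_PER_LIFE * DEATHS_PREVENTED  # 36,000,000

print(f"Cost of the fix:   ${total_cost:,}")
print(f"Monetized benefit: ${total_benefit:,}")
print("Fix 'justified' by the memo's logic?", total_benefit > total_cost)
```

Under these numbers the monetized ‘benefit’ falls far short of the cost, which is exactly the conclusion the memo used to argue against the safety improvement.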
In 1974, the Center for Auto Safety petitioned the NHTSA to recall Ford Pintos because of their faulty fuel system design, which had allegedly resulted in three deaths and four serious injuries in rear-end collisions at moderate speeds.
The NHTSA concluded:
‘1971–1976 Ford Pintos have experienced moderate speed, rear-end collisions that have resulted in fuel tank damage, fuel leakage, and fire occurrences that have resulted in fatalities and non-fatal burn injuries … The fuel tank design and structural characteristics of the 1975–1976 Mercury Bobcat which render it identical to contemporary Pinto vehicles, also render it subject to like consequences in rear impact collisions.’
Ford proceeded to voluntarily recall the Pinto vehicles before the NHTSA published an official order, fearing additional damage to the company’s public reputation. Ford recalled 1.5 million cars across the Ford Pinto and Mercury Bobcat models.
The recall would become the largest in automotive history at the time. Ford never admitted the fault in the fuel system; rather, the company claimed the recall was done to ‘end public concern that has resulted from criticism of the fuel systems in these vehicles.’
More than a hundred lawsuits were brought against the company as a result of the accumulated rear-end accidents involving the Pinto model.
There you have it – our first case of intersection between science, technology, and society. It is a case that raised several important questions for society:
- Can there ever be a price put on a human’s life?
- Are there companies that are doing the same in a less public way today?
- How much safety is enough safety?
- With Big Data, we may have a way to calculate the investments in safety measures, and cross-check with the risks, resulting in a comparative analysis of how much each public company values human life. Should we publish those findings, even if we do not have proof of ill intentions?
- Was Ford justified in its actions because it had calculations and cost figures to follow?
The case of social media and privacy
Social media have become intertwined with our everyday lives. We use them to text, post our photos and statuses, and share lifetime events such as passing a driver’s license exam or the birth of a child.
We use them to find, share, and comment on news. To educate ourselves. To take surveys and get information. To listen to or create music. To consume or produce videos. We use them to make business connections and network, to apply for jobs, and to research employers.
We know we benefit, but we are not balanced in weighing the gains against the costs. We may think about our privacy, but we don’t think much before we sacrifice it for entertainment. And we rarely even consider the risks to our personal security.
In the age of data mining, advanced analysis of human behavior on social networks can be performed without breaching private information. Yet social media companies rarely take measures to safeguard the user, and their privacy protections are heavily scrutinized.
Meanwhile, individuals are all too eager to forego some privacy and expose themselves to a considerable level of risk.
Statistics tell us that users are most cautious in their use of Facebook. As the most scrutinized social network in the news, Facebook also provides the most protection options:
- Restrict who can see your activity
- Control how others can find you
- Manage who can tag you in photos
- Set login alerts
- Block spam users
- Control who can message you
But not all popular social networks follow suit. Neither Twitter, LinkedIn, nor Google+ offers the same options for protection.
Users who do not protect their profiles leave themselves exposed to various attacks:
- Privacy Breach
- Passive Attacks
- Active Attacks
Privacy awareness in society is weak, and the methods social media provide to address it are ineffective. Users put considerably less effort into protecting their social media privacy than into other kinds of security measures.
Moreover, the majority of social media users are not educated about the risks of exposing their private data on social media, and the companies are not taking appropriate measures to educate them or to make privacy management adequately easy and understandable.
Multiple shortcomings and setbacks can be identified on the technical side of privacy and safety measures.
It is obvious that some policies that could be enforced aren’t. Even though social media companies realize the security benefits of those policies, they weigh them against user convenience and consciously choose NOT to force users to:
- Use strong passwords
- Change passwords often
- Run antivirus or related software
- Keep up with high security measures
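The first policy on that list is trivial to enforce in code, which underlines that not enforcing it is a deliberate choice. Here is a minimal sketch of a strong-password check; the specific thresholds are illustrative assumptions, not any platform’s actual policy:

```python
import re

def is_strong(password: str) -> bool:
    """Minimal strong-password policy sketch: length plus character variety.
    The thresholds are illustrative assumptions, not a real platform's rules."""
    return (
        len(password) >= 12                              # minimum length
        and re.search(r"[a-z]", password) is not None    # a lowercase letter
        and re.search(r"[A-Z]", password) is not None    # an uppercase letter
        and re.search(r"\d", password) is not None       # a digit
        and re.search(r"[^\w\s]", password) is not None  # a symbol
    )

print(is_strong("password123"))         # too short, no uppercase or symbol
print(is_strong("Corr3ct-Horse-Batt"))  # passes every check
```

A signup form that rejects weak passwords would simply call a check like this before creating the account; the choice most networks make is to warn rather than reject.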
And there you have our second case of intersection between science, technology and society. Here, we can also distill several important questions:
- Are social media networks doing a cross-analysis between convenience and security?
- Should social media networks be forced to impose the rules of higher security or should they keep the convenience to the user?
- Should social media networks work harder to inform the user of the security risks even if it is damaging to their business?
- Are they the best actors to perform that education?
The case of technology affecting politics
Or in other words, the case of Cambridge Analytica.
In line with the previous case, it is once again a matter of data and social networks. But this time we are looking at the way a technology that seemed legitimate at first glance came to sway an election in the most powerful democracy on Earth.
March 2018. A whistleblower reports to The Observer that Cambridge Analytica, a company owned by Robert Mercer and headed by Steve Bannon, had unauthorized access to the personal information of millions of Facebook users, harvested back in 2014, and used it to sway public opinion toward the then presidential candidate Donald Trump.
‘We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.’ – says Christopher Wylie.
The data was collected via an app called this is your digital life. Hundreds of thousands of users took a personality test through the app, agreeing for their data to be collected for academic use.
That permission, of course, was overstepped. The app also collected data from the social media profiles of its users’ friends, exploiting a loophole in Facebook’s platform policy and exponentially growing its reach to tens of millions of people.
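The scale of that friend-graph loophole is easy to illustrate with a back-of-the-envelope estimate. Both figures below are assumptions for illustration: reported numbers put direct app users in the hundreds of thousands and total affected profiles in the tens of millions.

```python
# Rough illustration of how friend-graph harvesting scales.
# Both inputs are illustrative assumptions, not figures from the article.

app_users = 270_000   # assumed number of direct quiz takers
avg_friends = 200     # assumed average friend count per user

# Upper-bound estimate, ignoring overlap between friend lists:
reach = app_users * avg_friends
print(f"Estimated profiles reached: {reach:,}")  # 54,000,000
```

Even with heavy overlap between friend lists, a few hundred thousand consenting users can expose tens of millions of non-consenting ones, which is why the loophole mattered so much.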
The scandal is an example of unprecedented data harvesting, and it raises valid questions about the role Facebook and other social media may have on serious political events such as the US presidential election. Moreover, it comes only weeks after indictments of thirteen Russian nationals by Robert Mueller with accusations they used the platform to ‘perpetrate “information warfare” against the US.’
The whistleblower had collected a dossier of proof of the data harvesting back in 2014 and presented it to the authorities. The documents included a 2016 letter from Facebook acknowledging that the network was aware of the issue; in it, a lawyer representing the network asked Cambridge Analytica to immediately delete the data it had acquired without authorization.
Cambridge Analytica had spent nearly a million dollars to collect the data.
‘The algorithm and database together made a powerful political tool. It allowed a campaign to identify possible swing voters and craft messages more likely to resonate.’
And there you have our third case of intersection between science, technology and society. Our questions:
- If big data analysis can be used to sway public opinions in democratic elections, should any attempts be done to restrict the technology?
- If Cambridge Analytica was used to sway public opinion, but not to directly affect the votes, should they face any legal issues whatsoever?
- Can we put a price on any of the two – fair elections and technological progress? And is there any way to compare their value for our life today?
The case of Genetic engineering
The last case we will review gets a more modest treatment in our article, because it has, so far, not been tied to any public scandal.
Is genetic engineering moral? Looking back, we can start the answer by saying that so far it has been mostly beneficial.
It involves directly manipulating the genes of an organism. Humanity has long used another form of manipulation, selective breeding, but being able to modify or mutate a given gene or DNA sequence at will speeds up the process significantly.
So far these experiments have not been reported to bring significantly unwanted results, and at the same time they have contributed considerably to scientific discoveries about how DNA works.
With great power comes great responsibility, however. There is a case to be made against genetic engineering of humans. The ability to improve our DNA is followed by the shadow of eugenics.
If we are capable of ‘producing’ better humans, would that raise questions about disposing of the worse, faulty ones? Creating artificial, objective superiority is a dangerous science.
And there you have our fourth case of intersection between science, technology and society. Our questions:
- If we are capable of producing a generation without cancer, autism, multiple sclerosis and disabilities, isn’t it our responsibility to do so, and spare future children from the suffering of disease?
- And on the other hand, can we afford the risk to dive into the unknown and leave as a legacy to our next generation to sort through the conflict between the ‘superior’ and the ‘inferior’ – a conflict we, as a society, have proven multiple times to be too immature to handle?
Whether we like it or not, science and technology are here to stay. And it is not a good idea to try to constrict them or hinder them.
We need the future; we need the Internet of Things. We want our devices to be interconnected, to help us in our lives, and to make them easier. And we want technology to penetrate the health industry. We need that direly.
This is where the need for studying STS is most obvious. Someone must ask the difficult questions and prepare society for the heavy moral dilemmas and risks that come with progress.
Only time will tell whether STS will prove a fruitless attempt to put thought into our natural progress, one that eventually disintegrates our society, or whether it will prepare society and help it mature for a future without conflict between humanity and technology.
“It doesn’t make sense to hire smart people and then tell them what to do; we hire smart people so they can tell us what to do.” – Steve Jobs