
The future of artificial intelligence necessarily requires sociological guidance.

In the race to out-compete other businesses, artificial intelligence (AI) is being built without a deep understanding of what data about humans means and how it relates to equity. According to two Drexel University sociologists, we should pay far more attention to the social effects of AI, because it is appearing in everyday life more regularly than ever before.


"As part of the attempt to reduce the risks associated with face-to-face encounters, the coronavirus pandemic has accelerated the use of AI and automation to replace human jobs," said Kelly Joyce, Ph.D., a professor in the College of Arts and Sciences and founding director of the Center for Science, Technology, and Society at Drexel. "We're seeing more and more instances of algorithms that exacerbate current disparities. We must address this inequity when organisations such as schooling, healthcare, warfare, and the workplace implement these structures."

In a new paper published in Socius, Joyce, Susan Bell, PhD, a professor in the College of Arts and Sciences, and colleagues raise concerns about the push to rapidly accelerate AI development in the United States without a matching acceleration of the training and development practices needed to make ethical technology. The paper proposes a research agenda for a sociology of AI.


"AI programmes that promote equality include sociology's understanding of the relationship between human data and long-standing disparities," Joyce explained.

What do we mean when we say "artificial intelligence"?


The term "artificial intelligence" has been interpreted in a variety of ways, with early versions associating the term with software that can learn and function on its own. Self-driving vehicles, for example, learn and recognise routes and obstacles in the same way as robotic vacuums learn and recognise the perimeter or layout of a house, and smart assistants (Alexa or Google Assistant) learn and recognise the tone of voice and interests of their users.


"AI's appeal is aided by its fluid definitional reach," Joyce said. "Its broad, but undefined sense allows proponents to make speculative, empirically unsupported claims about its possible positive societal effects."

According to Joyce, Bell, and colleagues, programming communities have concentrated in recent years on developing machine learning (ML) as a type of AI. While AI remains the public-facing term used by businesses, institutes, and initiatives, researchers themselves more often use the term ML.

"ML focuses on teaching computer systems how to identify, sort, and forecast outcomes from existing data sets," Joyce explained.

In practice, computer scientists, data scientists, and engineers feed existing data into AI systems to train them to identify, sort, and forecast outcomes, and ultimately to make decisions autonomously. The problem is that most AI practitioners do not recognise that data about humans is almost always also data about injustice.
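To make that workflow concrete, here is a minimal, hypothetical sketch of the train-and-forecast loop described above, using scikit-learn and a synthetic data set (the data, model choice, and numbers are illustrative assumptions, not drawn from the paper):

```python
# A minimal sketch of the ML workflow described above: a model is fitted
# to an existing data set and then used to forecast outcomes for new cases.
# The data here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Existing data": 1,000 historical records with 5 features and a binary outcome.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)            # learn patterns from past records
print(model.predict(X_test[:5]))       # forecast outcomes for unseen cases
print(model.score(X_test, y_test))     # accuracy on held-out data
```

Whatever patterns the historical records contain, including patterns of disadvantage, are exactly what the fitted model learns to reproduce.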


"AI practitioners do not realise that data about X (e.g., ZIP codes, health records, highway location) could also be data about Y (e.g., class, gender, or race differences, socioeconomic status)," said Joyce, the paper's lead author. "They may believe, for example, that ZIP codes are a neutral piece of information that applies to all equally, rather than realising that, due to segregation, ZIP codes often provide information about race and class. As machine learning systems are built and implemented, this lack of awareness has resulted in the acceleration and intensification of inequalities."

"When AI systems discover associations between disadvantaged groups and life chances, they recognise these correlations as causation and use them to make future intervention decisions. AI structures, in this sense, do not construct new futures, but rather reproduce the long-term disparities that exist in a given social world "Joyce explains.

Is AI at risk from institutional prejudice and human bias?


Algorithms, data, and code are all bound up with politics. Consider the Google search engine. Although Google search results present themselves as neutral and singular, the search engine replicates the sexism and racism of everyday life.


"The decisions that go into making the algorithms and codes reflect the viewpoint of Google staff," Bell explains. "Their decisions on what to mark as sexist or racist, in particular, reflect the larger social systems of pervasive racism and sexism." As a result, decisions on what constitutes sexist or racist behaviour 'train' an ML machine. Despite Google's claims that users are to blame for sexist and discriminatory search results, the root is in the 'input.'"


"Contrary to the appearance of neutrality, social oppression and injustice is rooted in and reinforced by Google's search results," Bell says.


Another example the authors cite is AI systems that use data from patients' electronic health records (EHRs) to predict effective care recommendations. While privacy is often a consideration for computer scientists and engineers when developing AI systems, understanding the multivalent dimensions of human data is not necessarily part of their training. As a result, they may mistakenly treat EHR data as objective information about care and outcomes, rather than viewing it through a sociological lens that recognises how partial and situated EHR data is.


"You recognise that patient results are not impartial or objective when using a sociological approach," Joyce says, "since they are linked to patients' social status and therefore tell us more about class disparities, inequality, and other kinds of inequalities than the efficacy of specific treatments."


The paper cites examples of AI exacerbating existing inequalities, such as an algorithm that recommended less healthcare for black patients than for white patients with the same conditions, and a study showing that facial recognition software is less accurate at identifying women and people of colour.
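One routine way such gaps become visible, in the spirit of the sociological lens the authors call for, is to disaggregate a model's performance by social group instead of reporting a single overall score. The tiny audit below is a hypothetical sketch; the data and column names are made up for illustration:

```python
# A sketch of a disaggregated audit: break a model's accuracy out by
# social group rather than reporting one overall number.
# The table is hypothetical; real audits would use the deployed system's logs.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "predicted": [1, 0, 1, 0, 0, 0],   # model's care recommendation
    "actual":    [1, 0, 1, 1, 0, 1],   # care the patient in fact needed
})

correct = df.predicted == df.actual
print(f"overall accuracy: {correct.mean():.2f}")   # looks passable in aggregate
print(correct.groupby(df.group).mean())            # group B's needs are missed far more often
```

A single aggregate score of 0.67 hides that the model is perfect for group A and wrong twice out of three times for group B, which is exactly the kind of disparity the facial recognition and healthcare examples above exposed.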


"A sociological understanding of data is significant since the uncritical use of human data in AI sociotechnical systems would appear to replicate, if not worsen, preexisting social inequalities," Bell said. "While AI companies argue that algorithms or platform users generate racist, sexist outcomes, sociological scholarship shows that human decision making occurs at every stage of the coding process."


In the paper, the researchers show how sociological scholarship can be combined with other critical social science research to avoid some of the pitfalls of AI applications. "Sociological work examines the design and implementation of AI sociotechnical systems, bringing human labour and social contexts into view," Joyce said. Building on sociology's understanding of how organisational contexts shape outcomes, the paper argues that funding sources and institutional contexts are key drivers of how AI systems are built and used.


Should AI be guided by sociology? Yes, say the researchers.


Despite well-intentioned attempts to integrate knowledge about social contexts into sociotechnical systems, Joyce, Bell, and colleagues argue that AI researchers continue to demonstrate a narrow understanding of the social, prioritising only what is relevant to the execution of AI engineering tasks while ignoring the complexity and embeddedness of social inequalities.


"The highly structural approach of sociology often contrasts with approaches that emphasise individual choice," Joyce said. "One of the most pervasive political liberalism tropes is that individual preference drives social change. Individually, we will create more equitable worlds by creating and selecting better goods, policies, and political leaders, according to the argument. When engineers and ethicists emphasise removing individual-level human bias and enhancing sensitivity training as a way to fix disparity in AI systems, the software community continues to maintain a similarly individualistic viewpoint."

Joyce, Bell, and colleagues encourage sociologists to use the discipline's theoretical and methodological tools to examine when and how AI systems make inequality more durable. The researchers emphasise that developing AI sociotechnical systems is not merely a matter of technological design; it also raises fundamental questions of power and social order.


"Sociologists are trained to recognise differences in all facets of society and to suggest ways to address systemic social change. As a result, sociologists should take the lead in imagining and forming AI futures "Joyce remarked.

