
AI Researcher Zhouyan Liu on Automated Technology Misuse, Digital Surveillance


Zhouyan Liu is fixated on ideas of data, privacy, property and technology. His work as a graduate student at the University of California, Berkeley zeroes in on how burgeoning artificial intelligence systems impact those concepts in both the U.S. and his home country of China.

Liu has also recently begun work as part of the AI Policy Hub, a joint interdisciplinary initiative between UC Berkeley’s Center for Long-Term Cybersecurity and the CITRIS Policy Lab that is training forward-thinking researchers to develop effective governance and policy frameworks to guide AI, today and into the future.

ExecutiveBiz spoke with the researcher — who has also worked in journalism and the technology industry — about the potential misuse of AI in both the public and private sectors; how digital surveillance and incursions on privacy exist in both China and the U.S.; and more.

What has your role and principal focus been while working with the AI Policy Hub?

Zhouyan Liu: My role is as a graduate student researcher. It’s the program’s first cohort, and it supports six graduate students to do their own research on AI technology and all of its social and technical aspects. As for my principal focus, there are two parts. The first part is a peer review-style article on digital surveillance, and the second part involves policy recommendations or deliverables for the public or policymakers. So, we [at the AI Policy Hub] all focus on different areas. I’m from the social sciences side, while some of the other participants are from the more technical side.

What do you see as the most pressing concerns or most glaring potential breaches in acceptable operations within artificial intelligence technologies?

Zhouyan Liu: The primary concern is my research topic: digital surveillance. Now, it is possible for the government to use AI technology to gather more information and repress protests and other social movements. Authoritarian regimes face a dilemma: you can repress the protest, but you need to have a very strong police force, and a strong police force is itself a threat to politicians’ control, because if the police department is too strong, it can take over those in power. So-called police states, such as East Germany, have always faced this kind of dilemma.

My argument is that technology provides these governments with an alternative. If they combine AI technology, the surveillance system and the information collection system with informal political control, like local governance, they might be able to repress protests without even needing to maintain a very strong formal police force. Therefore, technology can give authoritarian states a second life, which is my concern.

In classical theory, authoritarian states are in a disadvantaged place in terms of information collection, because in a democracy, you have feedback systems, such as elections, and you know what people like and don’t like. You also have free media and other kinds of mechanisms to provide feedback on what the people are thinking and their discontent. 

But in autocracies you don’t have these kinds of institutions. Political scientists assume that autocracies are in a bad place because they cannot gather accurate information. Today, however, things might change, because our lives are all digitized. What you say to others, what you do, what you consume and where you travel are all examples of information that can be collected through digital systems. And in authoritarian states, there is not much in the way of privacy rights or personal data protection to restrict that collection.

Have you done research or thinking about the ways that the private sector or commercial companies misuse AI? 

Zhouyan Liu: Before I came to UC Berkeley, I was a reporter. I worked for a Chinese magazine and then I worked for a technology company in China for a short period. 

The first aspect is about privacy. I think some social media platforms are misusing personal data, but this depends on your definition of misuse. Social media companies want business profit, so they have a strong incentive to collect as much information as they can.

The second aspect is the antitrust problem. Many people criticize Amazon because they have all the data, so they know what products are most popular on their platform. Amazon can then make a popular product on their own and disadvantage other businesses. Smaller businesses don’t have access to the same data as Amazon, so they don’t know what is popular. This strategy is a new form of monopolization, and I think it counts as misuse because it’s not a fair game.

The third aspect is related to the First Amendment. Freedom of speech does not apply to conversations hosted by private companies. So if Twitter or Facebook want to delete someone’s posts or deplatform them entirely, they can do so because they’re a private company.

The fourth aspect of the misuse of AI technology in the private sector is data property. For example, if you register on a social media platform, they have your data and they use that data to make a profit, but at the same time, they also provide you with some service. This seems like an exchange, right? Because you don’t need to pay for Google or Twitter. But is it really a fair trade? How should I value my personal information? $50? $10,000? Half a million dollars? It’s very hard to say, and it’s a very abstract concept. I think this is a huge misuse.

We need researchers to develop a system to measure how much our information is worth, because data is our property and there should be a fair trade for it. Currently, I think it’s a market failure, because private companies have taken advantage of citizens.

I’m curious about your personal research on China’s digital surveillance system and its impacts on privacy rights. What do you see in terms of these digital surveillance issues in the U.S. government? Have you given any thought to comparisons, or to whether there are echoes of the same issues you’re seeing in China?

Zhouyan Liu: All governments want the same thing: to collect more information and have more control. This is seen in several reports, such as the U.S. government’s secret operations to surveil people’s mobile phones. Ultimately, I don’t think there is much difference in intention [between the U.S. and China]; the difference is in what happens in reality. What [governments are] able to do varies across countries: democracy, the rule of law and free media all protect people from the government’s abuse of AI-related technologies. On the other hand, if there is a loophole in the U.S. legal system, the government can also do very bad things, because it has more resources.

There is a common misunderstanding that the Chinese government is abusing technology because of an absence of data laws, but actually, there are very strict data security laws in China. China has a lot of information protection law; it just primarily regulates the private sector, while the government is regulated separately. That doesn’t mean no laws exist. In terms of some of the obvious examples, like surveillance cameras or police stations, these are more prevalent in China than in the U.S.

I am also concerned about the technology decoupling strategy. Less cooperation might make it harder to regulate ethical problems. A genetically modified baby was born in China in 2018, which was a big scandal at the time. However, many people in China are quite excited about new scientific progress; the culture and norms are very different. I think it’s reasonable to predict that there will be similar developments in the future. What if we have the technology to really build a science fiction-style AI human being?

While you have mature laws and ethical standards in the Western world, the world is already decoupled. Other countries might just build an AI human being anyway, which would be very hard to control. There are lots of problems with unregulated scientific progress, and I think that is why the decoupling strategy might bring some unintended problems in the future.
