PLA's AI-Enabled Influence Operations

The PLA justifies its social media manipulation efforts based on the perception that the US aims to undermine the CCP regime, leading to the development of AI-driven strategies, including social bots

October 14, 2024 By Lt. General P.C. Katoch (Retd) Photo(s): By People's Liberation Army
The author is a former Director General of Information Systems and a Special Forces Veteran, Indian Army

 

The PLA is well placed to adopt generative AI for social media manipulation. The scale of China's digital influence operations is still unknown.

A research report by America's RAND Corporation, citing Dr Li Bicheng (a Chinese military-affiliated researcher and leading expert on mass social media manipulation who is believed to be at least partly responsible for the regime's adoption of these technologies), has analysed how China is manipulating foreign social media. The report, published on October 1, 2024, explores Chinese strategy, operational planning, and capability development; the potential implications of generative Artificial Intelligence (AI) for China's social media manipulation; and how China's People's Liberation Army (PLA) has conceptualised and operationalised its approach to cyber-enabled influence operations.

A RAND Corporation report, citing Chinese researcher Dr Li Bicheng, analysed how China is using generative AI to manipulate foreign social media, highlighting the PLA's approach to cyber-enabled influence operations.

Drawing evidence from over 220 Chinese-language academic journal articles and more than 20 English-language international conference articles written by Li, the research found that the PLA began developing its social media manipulation capabilities by the mid-2010s and began employing them by 2018. The PLA is clearly interested in leveraging AI for social media manipulation, and some PLA researchers conduct cutting-edge work in this sphere. The PLA and the Chinese Communist Party (CCP) are well placed to adopt generative AI for social media manipulation, including running social bots for large-scale efforts. The PLA justifies its social media manipulation efforts based partly on the threat perception that the US is seeking to undermine the CCP regime.

Recommendations of the report include the following:

  • The US and other global democracies should prepare for AI-driven social media manipulation by adopting risk-reduction measures, including promoting media literacy and government trustworthiness, increasing public reporting, and increasing diplomatic coordination;
  • The US should conduct an independent, comprehensive evaluation of its information efforts and ensure that the benefits outweigh the costs;
  • The US government should consider engaging with Beijing on restricting AI-driven influence operations;
  • Additional research should be conducted to better understand China's foreign social media manipulation, its technical approach, and the involvement of Chinese companies.

The report notes that the CCP's AI activities run counter to the regime's public statements, in which officials have overtly stated their opposition to using AI for disinformation. It also notes that the CCP's initial response to social media-fuelled uprisings, such as the Arab Spring, was to crack down on the technology.

The RAND Corporation's first report focused on the planning and strategies behind the CCP's social media influence campaigns. It also took into account the CCP's planning documents calling to "strengthen international communications capabilities and construct foreign discourse power", framed as a response to Western use of 'online psychological warfare'. In 2013, the newly elected Chinese President Xi Jinping had called for "launching a public opinion struggle" and aimed to "build a strong cyber army." Stating that China is a victim of accusations coming from the West and that Chinese retaliation is justified, he said, "We must meticulously and properly conduct external propaganda and innovate."

The report found that China's People's Liberation Army (PLA) began developing social media manipulation capabilities in the mid-2010s, employing them actively by 2018, with state-sponsored efforts targeting foreign groups.

China's social media disinformation campaigns were running at full tilt by 2019. They were extensively used during the Hong Kong protests, during the COVID-19 pandemic, and in the 2022 US midterm elections. 'Spamouflage', identified as a CCP-led initiative in 2023, began during this period. Over the years, China established new departments focused on online propaganda across several CCP bodies, as well as more cross-department collaboration in foreign influence campaigns. By 2017-2018, large-scale, state-sponsored efforts were actively targeting foreign groups. In 2018, Taiwan accused the PLA of social media manipulation to interfere with Taiwanese elections.

Significantly, in 2023 Li Bicheng wrote that China's handling of foreign social media remained inefficient and still required human effort. He recommended the use of AI in the following six-step process:

  1. Discover and acquire key information;
  2. Prepare and select appropriate media carriers;
  3. Produce tailored content for each of the targeted online platforms;
  4. Select appropriate timing, delivery mode, and steps;
  5. Strengthen dissemination across multiple sources by forming "hot spots";
  6. Further shape the environment and expand influence.

The aim was to create intelligent posts that can be "automatically" deployed on social media in the most effective way, personalised to target audiences.

China is also developing a simulated environment, or "supernetwork", to examine whether AI-generated content has the desired effect of swaying public opinion. Some evidence of China's AI-generated disinformation surfaced in 2023, such as AI-generated images of the Hawaii wildfires and an influence campaign on YouTube that pushed pro-China and anti-US narratives on topics including a "US-China tech war" and geopolitics.

In 2023, Li Bicheng recommended using AI to automate a six-step process for social media manipulation, aiming to create intelligent, personalised posts that can be automatically deployed for maximum effect.

The RAND study is essentially focused on Chinese influence operations in the US, Canada, and Australia, among others. At the same time, the report acknowledges that its findings do not capture the scale of China's digital influence operations.

In 2021, the Carnegie Endowment published a paper titled 'China's Influence in South Asia: Vulnerabilities and Resilience in Four Countries', focusing on:

  • Chinese activities to shape or constrain options for political and economic elites;
  • Tactics to shape or constrain the parameters of local media or public opinion;
  • Impact on local civil society and academia.

This paper mainly discusses China's influence operations through economic means (investments, trade, the BRI) and through propaganda, domestic and foreign, spread via networks of cadres and officers installed in party branches across the state, the bureaucracy, and even Chinese and foreign-owned private enterprises.

The RAND study emphasised the need for global democracies to adopt risk-reduction measures against AI-driven social media manipulation and suggested diplomatic engagement with Beijing to curb such influence operations.

The effect of Chinese influence operations in the South Asian Region (SAR), particularly in India's immediate neighbourhood, is more than evident, be it in Nepal, Bhutan, Myanmar, Bangladesh, Sri Lanka, the Maldives or Pakistan. In India, influence operations, including the use of bots, are seen more as tools to win elections, boost individual political standing, undercut opponents, and advance political agendas, however sinister those may be (as witnessed in Manipur).

It is already on record that US entities like Meta, the Soros Foundation and some others were trying to influence India's 2024 general elections. It would be good for India to closely examine how China's digital influence operations have developed over the years, and the acceleration they are gaining through AI-generated automation, in order to plan and execute an appropriate defence and response. Finally, influence operations must become a vital part of our foreign policy, in addition to economic and other types of assistance.