MARCH 1, 2023
A Guide to Narrative Warfare in the Attention Age: Data, Organic Channels, and Guerrilla Messaging in Propaganda
Abstract
In today’s digital “attention age,” propaganda and counter-propaganda campaigns increasingly hinge on fleeting touchpoints of just a few seconds. In this environment, data-driven targeting, organic social channels, and guerrilla-style messaging tactics can rapidly form or distort belief systems before audiences even realize it. This paper explores how these elements are strategically used to promote or discredit narratives, focusing on the proliferation of propaganda via 1–5 second exposures such as headlines, viral memes, short videos, and brief comments.
Introduction
In an age defined by information overload and shrinking attention spans, controlling the narrative has become a key battleground for state and non-state actors. Analysts note that the average human attention span may have dropped to mere seconds in the past two decades, falling from about 12 seconds in 2000 to roughly 8 seconds by 2020. Within those few seconds – the time it takes to glance at a headline or scroll past a social media post – opinions can be seeded and biases reinforced. This paradigm shift has given rise to “narrative warfare,” where propaganda campaigns capitalize on rapid, repetitive messaging and emotional triggers to sway beliefs almost instantaneously. Public opinion today is shaped less by lengthy discourse and more by bite-sized content: a provocative tweet, a shocking 3-second video clip, a catchy meme, or a heated comment thread. Each of these micro-touchpoints can act as a vector of influence, especially when algorithmically amplified to reach millions. As one analysis observes, social media now “churns out a daily avalanche of non-verified (and possibly fake) news, (mis)information, conspiracy theories and toxic headlines”, blurring the line between truth and falsehood in the public consciousness.
The stakes of this new propaganda landscape are high. Malicious narratives can sow confusion, feed hate, incite violence, instigate public distrust and poison the information environment. A false narrative, if cleverly packaged for the short attention economy, can go viral and “shape the very choices of statecraft”, as seen in conflicts from Crimea to Syria. Foreign adversaries and extremist groups exploit these dynamics to erode trust in institutions, undermine elections, and recruit followers. Conversely, governments and civil society are developing agile counter-messaging strategies to defend the truth and maintain social cohesion. The challenge for democratic governments and national security officials is acute: How can one effectively counter propaganda that propagates “at the speed of scroll”? How can truthful narratives compete in an environment where sensationalism and emotional appeal often trump facts?
This paper addresses these questions by examining how data, organic channels, and guerrilla messaging can be deployed to either promote or discredit narratives in the digital attention age. We place a particular emphasis on the paradigm of 1–5 second exposures – such as headlines, short-form videos, or comments – and how these fleeting touchpoints can rapidly solidify or skew belief systems. Importantly, our exploration is framed around the capabilities and experiences of Moonbrush, a strategic intelligence company at the forefront of narrative warfare and counter-propaganda. Moonbrush's work with government and enterprise clients provides a practical lens through which to view the modern information battlespace. By highlighting Moonbrush's tools (like data analytics platforms and AI-driven intent analysis) and methods (like coordinated cross-channel campaigns and “organic advocacy” networks), we illustrate concrete ways to identify, disrupt, and redirect harmful propaganda. The discussion is geared toward government officials and security professionals, underscoring the strategic relevance, national security implications, and operational effectiveness of narrative control in today’s environment. Through a structured academic approach – including a review of relevant literature, analysis of contemporary case studies (both adversarial and defensive), and a concluding discussion on implications – we aim to provide a comprehensive understanding of propaganda in the attention economy and how it can be managed.
Literature Review
Propaganda in the Digital Attention Age: Classical propaganda theory, from World War I pamphlets to Cold War radio broadcasts, assumed a relatively captive audience and longer-form content. Today’s reality is markedly different. The digital ecosystem has “profoundly transformed the informational landscape”, collapsing the old distinctions between producers and consumers of information. Social media platforms enable anyone to create and disseminate content instantaneously, often without gatekeepers or fact-checkers. This democratization of media has obvious benefits for free expression, but it also creates fertile ground for misinformation and deliberate falsehoods. The formation of public opinion now occurs amid a cacophony of competing discourses where truth and lies co-mingle, and traditional markers of credibility are eroded. In this chaotic environment, narrative supersedes nuance – a pithy claim or emotionally charged story can overshadow complex reality. Research by Habermas and others on the public sphere highlights that democracy relies on informed, reasoned debate; however, the new media milieu often favors speed over deliberation. Rather than structured content, platforms serve up fragments: trending hashtags, viral 10-second clips, and algorithm-curated headlines. These fragments compete for eyeballs in an attention economy where, as noted, humans may spend barely 8 seconds on a piece of content on average. The result is an “avalanche” of information overload and an audience that skims instead of reads, often drawing conclusions from a glance.
Cognitive Biases and Rapid Persuasion: Psychologically, the digital attention age plays to numerous cognitive biases. One well-documented phenomenon is the illusory truth effect, where repetition increases the perceived truthfulness of a statement. Even a single exposure to a catchy falsehood can boost its credibility on later repeats, and repeated misinformation – even if initially flagged as false – tends to feel more true over time. In the social media context, this means that seeing a short claim or headline multiple times (e.g. via shares and retweets) can implant a sense of truth. As an illustrative example, if a user encounters a headline like “Vaccine X Causes Illness” repeatedly in their feed, they may come to accept it, regardless of the source’s reliability. Studies have shown that mere repetition of statements increases belief in them, even months later. Another key factor is the role of emotion and heuristics. People often make snap judgments (using System 1 thinking) when scrolling through content. Short videos or memes that evoke fear, anger, or empathy can bypass analytical scrutiny and imprint a message viscerally.
Misinformation peddlers exploit this by crafting content with high emotional resonance – e.g. a 5-second clip designed to outrage – knowing it will be shared impulsively. Indeed, research indicates a majority of users share articles without reading beyond the headlines. According to psychology.com, over 59% of links on Twitter are posted by people who never clicked through to the content they purportedly endorse. This alarming statistic means that in many cases the headline is the message. Manipulators thus focus on provocative headlines or images that tell a narrative at a glance, confident that most readers will not investigate further. In short, digital propaganda leverages our tendency toward shallow processing: quick impressions, confirmation bias (we accept information aligning with our beliefs), and the social proof of seeing others share the same message.
Organic Channels and “Grassroots” Amplification: A defining feature of propaganda in the social media era is its ability to masquerade as organic content. Unlike overt state broadcasts or paid advertisements, much of today’s narrative manipulation happens through accounts and communities that appear to be ordinary citizens or grassroots movements. Scholars describe this as a shift from top-down propaganda to “participatory propaganda”, where unwitting users amplify messages. For example, many super-spreaders of fake news do not realize the content is false – they share it earnestly, which lends the misinformation a cloak of legitimacy. Adversarial actors exploit this by seeding false stories in forums or groups where they know particular communities will spread them. The role of bots and coordinated inauthentic accounts is also significant: with minimal cost, a single actor can deploy thousands of bots to make a hashtag trend or to flood comment sections, creating a false impression of consensus. As one U.S. Senate inquiry highlighted, hostile actors can “recruit… and spread propaganda almost certainly with minimal cost,” requiring a whole-of-government response to counter it. The content that goes viral through such means often appears “organic” to the average user – it might be a meme shared by a friend or a viral TikTok on their feed – rather than an official propaganda piece. This organic quality enhances the persuasive power, since people tend to trust information shared by peers over that from obvious authorities.
Guerrilla Messaging and Memetic Warfare: The concept of “guerrilla messaging” in this context refers to communication tactics that are unconventional, surprise-oriented, and often stealthy in nature – much like guerrilla warfare in the physical sense. Memetic warfare is one prominent form: it involves the deliberate creation and propagation of memes (units of cultural information such as image macros, slogans, or hashtags) to influence public opinion. Memetic warfare is recognized as a modern type of information and psychological warfare that uses memes on social media. These memes condense complex ideas into digestible visuals or catchphrases, making them ideal for the short attention span medium. Memes and viral videos rely on humor, shock, or relatability to catch users off guard and embed in their memory. They exemplify guerrilla messaging by striking when the audience is least expecting “propaganda” – for instance, a funny image shared in a casual setting can carry a subversive political message without feeling like a lecture. Guerrilla messaging also includes tactics like hijacking trending topics with disinformation, astroturfing (creating fake grassroots campaigns), and utilizing “unconventional interactions” to promote a narrative (similar to how guerrilla marketing uses street stunts or surprise ads). All these methods aim to leave a lasting impression with minimal, low-cost interventions. The digital landscape offers myriad opportunities for that: a single cleverly edited 15-second video on TikTok can garner millions of views overnight, far outpacing traditional newscasts. It is telling that a recent NewsGuard study found roughly 1 in 5 TikTok videos contain misinformation on major topics like COVID-19 or the Ukraine war. This underscores how short-form video platforms have become hotbeds of propaganda – quick clips with dramatic audio or text that are consumed in seconds and often not vetted by viewers.
Strategic Narrative Warfare – Bridging Data and Persuasion: The literature also increasingly discusses the fusion of big data analytics with propaganda techniques. Pioneering works by communication scholars and the experiences of political campaigns reveal that tailored messaging is far more effective than one-size-fits-all. Edward Bernays, the “father of public relations,” demonstrated as early as the 1920s the power of targeting specific psychological motivators – as in his famous “Torches of Freedom” campaign that reframed women’s smoking as a symbol of emancipation. Modern narrative warfare is essentially Bernays’ principles supercharged by data and AI. Moonbrush's own white papers draw a line from Bernays to today: the blueprint of “segment, tailor, appeal emotionally, and deliver through trusted channels” is now turbocharged by machine learning and ubiquitous personal data. In practice, this means propaganda (or counter-propaganda) can be hyper-personalized. With enough data points on an audience segment’s preferences, fears, and values, a narrative can be crafted to hit the “psychological buttons” most likely to persuade. This data-driven personalization is evident in operations like the 2016 Cambridge Analytica story, where political advertising was micro-targeted based on detailed psychographic profiles. It is also evident in hostile influence campaigns: for example, Russia’s Internet Research Agency (IRA) demonstrated a sophisticated understanding of U.S. social fissures by tailoring messages to specific racial and ideological groups. A U.S. Senate investigation found that “no single group… was targeted more than African-Americans” by the Russian campaign in 2016. Russian operatives created Facebook pages masquerading as Black activist groups, posting content to deter Black voters from voting and to incite distrust, often using subtly racist or incendiary narratives. Over 66% of the IRA’s Facebook ads contained race-related terms – a data-driven strategy to tap into racial tensions and suppress voter turnout for one side. This exemplifies how raw data (on demographics and social issues) can guide a propaganda narrative that is both highly specific and highly effective. The reach of such campaigns is staggering: Russia’s inauthentic Facebook posts and ads between 2015–2017 were estimated to have reached 126 million Americans – roughly half the U.S. electorate. In sum, the convergence of data analytics, behavioral science, and multimedia platforms has given narrative warfare unprecedented precision and scale.
The literature underscores that propaganda in the digital attention age is fast, fractured, and pervasive. It leverages cognitive shortcuts and social networks to embed narratives in the public mind before critical thinking can catch up. At the same time, new analytic tools provide opportunities to fight back or even wage positive influence campaigns by using the same channels and techniques for truth-telling and value promotion. This dual-use nature of narrative techniques – they can serve democracy or undermine it – makes it imperative to study and understand them. The next sections will outline our methodology for examining these phenomena, then delve into analysis and case studies highlighting both negative use cases (malign propaganda) and positive use cases (counter-messaging and narrative defense), with a focus on how Moonbrush's capabilities exemplify the state-of-the-art in this domain.
Methodology
Our research approach is qualitative and interdisciplinary, combining a literature review with case study analysis and industry insights. We surveyed current literature on digital propaganda, social media misinformation, and cognitive psychology to establish a theoretical context (as summarized above). We then conducted case studies of known propaganda and counter-propaganda operations, drawing on open-source reports, governmental hearings, and credible news investigations. These case studies – which include a Russian disinformation campaign, an extremist recruitment effort, and a counter-messaging success story – were chosen to illustrate the spectrum of techniques and their real-world impact.
No human subjects or proprietary data were involved; all information was obtained from published sources. By synthesizing academic research with practitioner knowledge, the methodology aims to yield a holistic view that is both conceptually rigorous and practically relevant. The findings are thus grounded in documented evidence and expert observations, ensuring reliability. We also apply a national security perspective in interpretation, given the target audience of government officials. This means evaluating each narrative technique for its potential threat to public order or its utility in defense, and considering operationalizability (e.g., how a government or contractor could implement the lessons learned).
The subsequent analysis distills the key findings from this research process, structured around the core elements of data, organic channels, and guerrilla messaging. Each element is explored in terms of how it contributes to narrative influence and how Moonbrush's approach exemplifies leveraging that element. Case studies are then presented to ground these findings in real scenarios, followed by a broader discussion on strategy and implications.
Analysis and Findings
Data: Micro-Targeting and Narrative Engineering
Data has become the lifeblood of modern propaganda campaigns. Detailed data on individuals’ behaviors and preferences – often harvested from social media and online activities – allows propagandists to segment audiences and micro-target messages with uncanny accuracy. Moonbrush's philosophy encapsulates this: classic persuasive tactics (identifying fears, desires, social pressures) are turbocharged by “machine learning and ubiquitous data” to push the right buttons for each segment. The analysis found that data enables narrative engineering in several ways:
- Micro-Targeting: Campaigns can identify niche groups (by ideology, ethnicity, geography, etc.) and tailor content specifically for them. For instance, using data analysis, Moonbrush might discern that suburban parents are concerned about a specific issue and then disseminate ads or stories addressing those exact fears. Similarly, adversaries like the IRA used Facebook’s ad targeting to find, say, young Black men interested in civil rights, and then deliver content to discourage voting. This granularity ensures the narrative “hits home” for the receiver, speaking their language and values.
- Personalization and A/B Testing: Data-driven campaigns often employ rapid experimentation. Multiple variations of a propaganda message can be floated, and feedback data (likes, shares, watch time) tells which variant is most effective. Those variants are then amplified. Real-time analytics – a capability Moonbrush emphasizes – allow for continuous optimization of the narrative. Our findings highlight that the most persuasive messaging is often discovered through iterative tweaking guided by data, rather than conceived in a top-down manner (a minimal sketch of this bandit-style optimization follows this list).
- Predictive Targeting (Intent Data): Advanced practitioners use data not just to react, but to predict audience susceptibilities. Moonbrush's development of a proprietary “intent data” product exemplifies this approach: by analyzing online behaviors, search trends, and engagement patterns, it aims to anticipate what content a person is most likely to respond to at a given moment. Such predictive insight allows campaigns to get ahead of narratives – e.g., identifying early that a certain false rumor is gaining traction among a demographic, and then proactively delivering counter-messaging to that group before the rumor hardens into belief.
- Coordinated Multi-Channel Delivery: Data also guides where and when to deliver messages for maximal impact. If analytics show a target audience spends more time on Instagram than Twitter, or watches YouTube in the evenings, the propagandist allocates resources accordingly. Moonbrush's strategy documents highlight an “omniscient” cross-channel approach where a target might see a consistent narrative across their Facebook feed, YouTube pre-roll ads, email newsletters, and even streaming TV ads. By blanketing the individual’s media environment (informed by data on media habits), the narrative gains a bandwagon effect – it seems to be coming from everywhere. Indeed, the analysis notes that when multiple sources echo a congruent message, individuals are “far more likely to accept it” due to confirmation bias.
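To make the iterative, feedback-driven optimization described in the A/B testing point concrete, the following is a minimal Python sketch of a Thompson-sampling bandit for allocating impressions among message variants. The variant names, "true" engagement rates, and feedback loop are hypothetical illustrations, not a description of Moonbrush's actual tooling; the same loop applies defensively when testing which corrective framings earn engagement.

```python
import random

# Hypothetical message variants under test; the names are illustrative only.
VARIANTS = ["headline_fear", "headline_hope", "meme_humor"]

# Beta-distribution parameters per variant: [engagements + 1, non-engagements + 1].
stats = {v: [1, 1] for v in VARIANTS}

def choose_variant():
    """Thompson sampling: draw a plausible engagement rate for each variant
    and serve the variant with the highest draw."""
    draws = {v: random.betavariate(a, b) for v, (a, b) in stats.items()}
    return max(draws, key=draws.get)

def record_feedback(variant, engaged):
    """Fold one observed impression back into the variant's posterior."""
    stats[variant][0 if engaged else 1] += 1

if __name__ == "__main__":
    # Simulated feedback loop with assumed (unknown-to-the-campaign) true rates.
    true_rates = {"headline_fear": 0.04, "headline_hope": 0.02, "meme_humor": 0.07}
    for _ in range(10_000):
        v = choose_variant()
        record_feedback(v, random.random() < true_rates[v])
    # After enough impressions, most traffic has flowed to the strongest variant.
    print({v: a + b - 2 for v, (a, b) in stats.items()})
```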
Taken together, these findings illustrate that data transforms propaganda from a blunt instrument into a scalpel. Rather than broadcasting one message to millions in the hope that it sticks, one can craft thousands of tailored messages for sub-groups or even individuals – a practice sometimes termed “precision propaganda.” Moonbrush's success stories bear this out: in one case, they executed a campaign that created millions of pieces of content (posts, images, videos, infographics), each carefully crafted for specific audiences. The result was a highly nuanced yet widespread influence operation that achieved a dramatic 30-point swing in public sentiment, which would be unthinkable without data-driven precision (we detail this case study later). For government officials, the implication is clear: harnessing big data and analytics is not just a marketing trend, but a national security imperative to both deploy and defend against narrative manipulation.
Organic Channels: Grassroots Amplification and Authenticity
Our analysis underscores that how a message spreads can be as important as what the message is. Organic channels – i.e. non-paid, user-driven dissemination such as social media sharing, community forums, and word-of-mouth – lend credibility and momentum to narratives in a way that official broadcasts often cannot. Several key points emerged:
- Peer-to-Peer Trust: People generally trust content shared by friends, family, or those they perceive as peers more than they trust institutional messages. Propagandists exploit this by making their narratives appear grassroots. For example, the Russian IRA created thousands of Twitter and Facebook accounts that looked like ordinary Americans (complete with personal names, profile photos, and post histories that suggested a real life behind the account). These fake personas would interject propaganda into real community discussions – akin to an undercover agent spreading rumors in a town square. Unaware of the ruse, real users would absorb and further share these messages. In one sense, this is “crowdsourced” propaganda.
- Viral Dynamics: Organic spread often follows a viral curve – starting slow, then exploding. A short message that resonates can be multiplied exponentially by the crowd. One hallmark of the digital age is that messages can achieve global reach without any significant advertising spend. A memetic slogan or sensational video can snowball through retweets and shares. This was seen in extremist propaganda: ISIS recruitment videos, for instance, were often shared and re-uploaded by sympathizers worldwide, allowing them to proliferate faster than platforms could remove them. Government counter-efforts have to contend with this hydra-headed dissemination; taking down one post often means it’s already echoed in hundreds of others. (A toy simulation of this spread dynamic follows this list.)
- Authenticity via Subtlety: Successful influence campaigns frequently mask orchestrated messaging as personal opinion. Moonbrush's case study of the statewide ballot initiative campaign revealed that a factor in swaying opinion was the subtlety of their approach – messages were designed to be “perceived as organic opinions rather than orchestrated efforts,” which “lent credibility and authenticity to the content.” By avoiding overt sponsorship or blatant propaganda cues, they made it easier for the public to accept and internalize the narrative (in that case, turning people against a once-popular policy initiative). This finding emphasizes that narratives stick when the audience feels they arrived at the conclusion themselves, or heard it from people “like them,” rather than being told by authorities.
- Community Infiltration: A more aggressive tactic involves infiltrating existing online communities (such as forums, chat groups, or comment sections) to seed narratives. This guerrilla approach relies on blending in with the community’s culture, then nudging conversations in the desired direction. We saw evidence of such techniques in multiple instances – e.g., coordinated networks of troll accounts that would bombard specific subreddits or Facebook groups with talking points at opportune moments. The goal is to leverage organic community engagement to amplify the message. If enough real community members start parroting the talking point, it gains legitimacy.
- Moonbrush's “Organic Advocacy”: On the defensive/constructive side, Moonbrush has developed what it calls Organic Advocacy as one of its capabilities. This involves mobilizing real stakeholders or influencers to champion a narrative sincerely. For instance, in another case, Moonbrush turned a long-shot political campaign into a national movement by orchestrating a bottom-up surge of support. They likely did this by identifying key local advocates and giving them the tools and messages to spread to their networks – a stark contrast to astroturfing because it used authentic voices (albeit guided strategically). For governments, enabling organic advocacy could mean partnering with community leaders, NGOs, or citizen volunteers to spread factual counter-narratives in a way that feels community-driven.
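The viral dynamics noted in the list above can be made concrete with a toy branching-process model: each share spawns a random number of further shares, and the effective reproduction number decays as the reachable audience saturates or moderation intervenes. This is a minimal sketch under assumed parameters, not a model of any specific platform or campaign.

```python
import math
import random

def sample_poisson(lam):
    """Poisson sampler (Knuth's method) to avoid external dependencies."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def simulate_spread(r0=1.8, decay=0.9, seed_shares=5, generations=12):
    """Toy branching process: each active share spawns a Poisson-distributed
    number of further shares; the reproduction number decays each generation
    to mimic audience saturation or platform moderation."""
    active, total, r = seed_shares, seed_shares, r0
    history = [seed_shares]
    for _ in range(generations):
        new = sum(sample_poisson(r) for _ in range(active))
        history.append(new)
        total += new
        active = new
        r *= decay
        if active == 0:
            break
    return total, history

if __name__ == "__main__":
    random.seed(7)
    total, history = simulate_spread()
    print("total shares:", total, "| per generation:", history)
```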
The findings here highlight a double-edged sword: organic channels can rapidly amplify either false or true narratives. The difference often lies in who gets ahead in the information race. If malicious actors seed a false narrative early and widely, by the time authorities respond with corrections, the “fake news” has already deeply penetrated social media circles. We note a pertinent insight from Taiwan’s experience: timing and tone are crucial. Taiwan’s government pioneered a “humor over rumor” strategy where they detect a false claim and, within hours, inject a correct message packaged humorously into the same channels. The use of humor and informal tone makes the official correction go viral more easily, often outpacing the original disinformation, and timing is critical because even a 24-hour delay is enough for a toxic meme to embed in memory. This exemplifies how leveraging organic viral spread (in this case, of a funny truth-check meme) can neutralize an organic viral lie.
In summary, organic channels are the battleground where narrative legitimacy is won or lost. Influence operations succeed when they become self-sustaining through user participation. Moonbrush's approach of coordinating narratives across many touchpoints and using authentic messengers is reflective of a broader principle: the narrative that wins is often the one that feels like it came from “us,” not “them.” For operational effectiveness, this means governments and their partners should invest in grassroots engagement and rapid response units that can participate in the fray of social media, not just issue press releases after the fact.
Guerrilla Messaging: Short, Viral, and Unconventional Tactics
The guerrilla aspect of narrative warfare refers to tactics that are nimble, unexpected, and often asymmetric. These are the quick strikes in the information domain – the propaganda equivalent of an ambush or sabotage. Key findings on guerrilla messaging include:
- Memes and Visuals as Weapons: As noted, memes epitomize guerrilla propaganda. They compress an idea into a simple image-text combo that can be understood in a second and shared instantly. Memes often use humor or irony, which lowers the audience’s guard. Analyzing memes used in propaganda campaigns, we find that they frequently employ classic techniques (as identified by media scholars) such as appeal to emotion, bandwagon (“everyone is saying this”), false dichotomies, etc., but in a tongue-in-cheek format. This makes them persuasive, especially to younger, media-savvy demographics, while flying under the radar of traditional fact-checking. A meme may not be taken “seriously” in isolation, but en masse, memes can normalize extremist or false ideas (e.g., the proliferation of antisemitic or racist memes in certain online subcultures created a gateway to more overt hate narratives).
- Headlines and Sound Bites: A related guerrilla tactic is the sensational headline or sound bite. These are crafted to grab attention in 1–3 seconds as a user swipes past. Many people form an opinion on an issue by scanning headlines alone. Thus, propagandists write headlines that deliver the narrative punch without requiring the reader to click. For example, a headline like “Government Caught in Massive Lie?” – even if the article is trivial or misleading – can implant suspicion of the government after a two-second exposure. Our literature review showed that according to a study by Penn State University, over 75% of news articles shared on social media weren’t actually read by the people sharing them, indicating the headline effectively becomes the news in the social context. Guerrilla messaging exploits this by front-loading all the necessary persuasive elements into titles, thumbnails, and hashtags.
- Rapid Deployment and Iteration: Guerrilla messaging thrives on agility. When an opportunity or crisis emerges, propagandists flood the zone with short content. This could be dozens of tweets within minutes of a breaking event, pushing a particular narrative before journalists can even publish a full story. By the time official information is out, the public may have already been “framed” by the initial narrative blitz. A prominent example is how conspiracy theorists and state-sponsored trolls often seize on tragedies (like a pandemic outbreak or a military strike) to circulate “explanations” or blame-assigning rumors immediately, shaping initial public perceptions. They often utilize bot networks to achieve trending status for hashtags or keywords associated with their narrative (a simple detection sketch for such coordinated bursts follows this list).
- Use of Influencers and Micro-Celebrities: A modern guerrilla tactic is to co-opt social media influencers or niche internet personalities to spread messaging. These individuals have pre-built trust with their audiences and can introduce talking points in a casual, off-the-cuff manner during streams or videos. From a narrative warfare standpoint, this is like deploying sleeper agents in the cultural sphere – their followers don’t see them as propagandists, so the messaging lands softly. Moonbrush's capabilities brief hints at influencer partnerships and tailored content delivered via trusted voices as part of a multi-channel narrative reinforcement. In practice, a government might discreetly work with sympathetic influencers to debunk falsehoods or promote civic messages, effectively turning them into force-multipliers for counter-propaganda.
- Psychological Operations (PsyOps) Elements: Guerrilla messaging often overlaps with what militaries call PsyOps – operations aimed at influencing emotions and behavior. For instance, during conflicts, quick propaganda leaflets or social media messages might be directed at enemy soldiers to demoralize them (“Your leaders are lying, you cannot win”) or at civilians to cause panic. In the digital age, these tactics have moved to social media and messaging apps. Short video clips of purported atrocities (sometimes staged or taken out of context) can be spread in enemy territories to break morale. A notable case is ISIS’s use of short execution videos or victory montages which, though horrific, served to intimidate opponents and exalt sympathizers, all within the span of a brief upload. Counter-operations similarly might use short content: for example, the Ukrainian government’s social media feeds have shared 10-second clips of captured enemy equipment or humorous taunts at opposing forces – psychological tactics to boost public morale and mock the invader, all easily digestible and shareable.
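The flip side of the rapid-deployment tactic referenced above is rapid detection. The following is a minimal sketch of how a monitoring team might flag a suspected coordinated hashtag burst by combining a volume spike with account-age clustering; the post-record schema and all thresholds are hypothetical and would need tuning against real platform data.

```python
from datetime import timedelta

# Hypothetical post records: (timestamp, account_id, account_created_at, hashtag).
# In practice these fields would come from a platform API or monitoring feed.

def is_coordinated_burst(posts, hashtag, window_minutes=10, spike_factor=5.0,
                         new_account_days=30, new_account_share=0.5):
    """Flag a hashtag as a suspected coordinated burst when (a) volume in the
    most recent window far exceeds the trailing per-window average, and
    (b) a large share of the posting accounts were created very recently."""
    tagged = [p for p in posts if p[3] == hashtag]
    if not tagged:
        return False
    window = timedelta(minutes=window_minutes)
    latest = max(p[0] for p in tagged)
    recent = [p for p in tagged if latest - p[0] <= window]
    older = [p for p in tagged if latest - p[0] > window]
    # Baseline: average volume per window across the older history.
    span_windows = max(1, (latest - min(p[0] for p in tagged)) // window)
    baseline = max(1.0, len(older) / span_windows)
    volume_spike = len(recent) / baseline >= spike_factor
    # Account-age clustering: a burst driven by brand-new accounts is a
    # strong cue of inauthentic coordination.
    fresh = sum(1 for p in recent if (p[0] - p[2]).days <= new_account_days)
    return volume_spike and fresh / len(recent) >= new_account_share
```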
Our overarching finding on guerrilla messaging is that speed, creativity, and surprise confer outsized influence. A 30-second satirical video debunking a rumor can be more effective than a 30-page report, if it reaches the audience in time. This is precisely why firms like Moonbrush emphasize “unmatched capabilities” with “unlimited possibilities” in narrative operations – it’s about being able to deploy the right piece of content at the right moment on the right channel.
Crucially, guerrilla tactics are not only offensive weapons; they are also defensive tools. The same methods malign actors use can be flipped. For example, “prebunking” – inoculating the public against a false narrative before it takes hold – can be done via quick, engaging messages that educate viewers on the common traits of, say, vaccine disinformation. Researchers have found that “psychological booster shots” (brief interventions that warn people about misinformation tactics) can significantly increase resistance to false claims. These boosters often take the form of short videos or interactive content, aligning with guerrilla-style brevity and engagement.
Guerrilla messaging is about winning the rapid reaction contest. It leverages the element of surprise and constant presence – appearing in every feed and chat with concise narratives – to dominate public discourse in the critical early moments of narrative formation. Our analysis indicates that organizations like Moonbrush, which possess the infrastructure to churn out and disseminate rapid content at scale, are particularly adept in this realm. By deploying memes, headlines, and viral content calibrated to resonate in seconds, they can effectively “flood the zone” with their client’s narrative while adversaries are still formulating responses.
Case Studies
To illustrate how data, organic channels, and guerrilla messaging manifest in practice – and how Moonbrush's capabilities align with these phenomena – we examine three case studies. The first two are archetypal negative use cases (propaganda campaigns that advanced harmful or deceptive narratives), and the third is a positive use case (a counter-messaging or narrative intervention that had a beneficial outcome). Each case provides insight into techniques and the impact of ultra-short touchpoints on public opinion.
Case Study 1: Russian Electoral Interference (2016 U.S. Election)
Background: In the lead-up to the 2016 United States presidential election, Russian government-linked entities (most notably the Internet Research Agency, or IRA) conducted a sprawling influence campaign across American social media. Their goal was to exacerbate societal divisions, suppress votes for one candidate, and generally erode confidence in the electoral process. This campaign is perhaps the clearest real-world illustration of data-driven narrative warfare executed via organic channels and guerrilla messaging.
Techniques Used: The Russian operation skillfully combined all three elements:
- Data: Leaked documents and subsequent investigations showed that the IRA had analyzed American demographics and political fault lines in detail. They knew which groups to target: African-Americans, evangelical Christians, gun-rights advocates, etc. They created content specifically for each, often using insights from American cultural trends. For example, they exploited data about racial tensions, producing content around police shootings and Black Lives Matter to inflame anger or despair. The campaign’s sophistication demonstrated an almost marketing-like approach to “audience segmentation and psychographic targeting”, as noted by researchers.
- Organic Channels: The Russians set up fake “grassroots” pages on Facebook (with names like “Heart of Texas” or “Blacktivist”) that attracted hundreds of thousands of genuine followers. They posted memes, event announcements, and opinion pieces that looked native to the communities they targeted. One Senate report concluded that no group was targeted more than Black Americans, with content designed to deter turnout – for instance, posts saying “your vote doesn’t matter” or promoting a third-party protest vote. These posts circulated widely as users engaged, unaware of the foreign origin. On Twitter, similarly, a network of bots and trolls posed as Americans, tweeting polarizing messages. Because the content appeared to come from “ordinary” people, it was shared organically by others, snowballing the reach.
- Guerrilla Messaging: The IRA’s content often took the form of punchy memes and slogans. For example, they circulated images of Hillary Clinton overlaid with derogatory text, or caricatures that could be grasped in an instant. They also latched onto trending hashtags (guerrilla-style hijacking of ongoing conversations) to insert their narratives. A Cornell study even found they used music and pop-culture Twitter debates to distract or sway young voters. In essence, they weaponized internet pop culture for political ends. Timing was another guerrilla aspect: they would release controversial material (like leaked emails or conspiracy theories) at critical junctures to dominate a news cycle. The campaign even organized real-world rallies via Facebook events – an instance of online guerrilla messaging translating to offline action.
Impact: The scale of reach was immense. As noted, Facebook later admitted that content from Russian pages potentially reached 126 million users in the U.S. Twitter identified tens of thousands of bot accounts pushing divisive tweets. While the exact effect on voting behavior is hard to measure, surveys after the election showed a significant number of Americans had been exposed to false stories (for example, the infamous “Pizzagate” hoax alleging a child trafficking ring, which began as an online conspiracy spread via short tweets and YouTube clips). More concretely, in late 2016, polling indicated that belief in various political falsehoods had spiked – a sign that the “illusory truth effect” had set in due to repeated exposures. One chilling outcome: in December 2016, an innocent pizza restaurant in D.C. was attacked by an armed man influenced by online conspiracy narratives. This demonstrated how quickly a meme-born fiction could prompt real-world danger. U.S. intelligence agencies concluded that the Russian propaganda effort both reflected and amplified partisan divides, contributing to an atmosphere of suspicion and hostility in the electorate.
Moonbrush's Relevance: If a company like Moonbrush had been active in countering this campaign, what might they have done? Based on Moonbrush's capabilities, we can surmise a few measures: Using data analysis, Moonbrush could have detected anomalous narrative surges (e.g. a sudden flood of anti-vote messages in Black communities) and identified inauthentic coordination. Moonbrush's data intelligence platform might map how a meme propagates from a single source to hundreds of groups, thereby tracing disinformation supply lines. With that knowledge, Moonbrush could coordinate counter-messaging – for example, injecting positive get-out-the-vote messages and factual correctives into the same communities via authentic voices (local influencers, community leaders). They might also advise government partners on “narrative inoculation,” issuing early warnings about emerging false themes (like “elections are rigged”) and saturating social feeds with simple fact-checks or reassurance messages before the lies take hold. Essentially, Moonbrush's approach of coordinated, cross-channel narrative reinforcement would be deployed in defense: ensuring that wherever the Russian trolls placed a piece of propaganda, a counter-narrative piece (from a credible source) would also appear, preventing an information vacuum that the adversary could exploit.
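As a concrete illustration of the propagation-mapping step described above, the sketch below builds a directed share graph and surfaces candidate seed accounts. It is a simplified stand-in using the open-source networkx library and a hypothetical share-record schema, not a description of Moonbrush's actual platform.

```python
import networkx as nx

def build_propagation_graph(shares):
    """Directed share graph from records of the form
    (source_account, resharing_account, content_id) -- a hypothetical schema."""
    graph = nx.DiGraph()
    for source, resharer, content_id in shares:
        graph.add_edge(source, resharer, content=content_id)
    return graph

def likely_seed_accounts(graph, top_n=5):
    """Accounts with no upstream source but large downstream reach are
    candidate injection points for a narrative."""
    roots = [n for n in graph.nodes if graph.in_degree(n) == 0]
    reach = {n: len(nx.descendants(graph, n)) for n in roots}
    return sorted(reach, key=reach.get, reverse=True)[:top_n]

if __name__ == "__main__":
    # Toy example: one seed account fans out through two intermediaries.
    shares = [("seed_1", "acct_a", "meme_42"), ("seed_1", "acct_b", "meme_42"),
              ("acct_a", "acct_c", "meme_42"), ("acct_b", "acct_d", "meme_42")]
    print(likely_seed_accounts(build_propagation_graph(shares)))  # -> ['seed_1']
```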
Case Study 2: ISIS Propaganda vs. Counter-Extremism Efforts
Background: The Islamic State of Iraq and Syria (ISIS) in the mid-2010s showcased how a terror organization could leverage digital media for recruitment and propaganda on a global scale. ISIS’s online outreach came to be dubbed the creation of a “virtual caliphate.” This case exemplifies propaganda’s power in the attention age – and also highlights attempts by governments to counter it. The target audience ranged from disaffected youths in Middle Eastern countries to English-speaking converts in Western nations. ISIS’s narrative was one of religious duty, adventure, and defiance against the West, packaged in highly attractive media snippets.
Techniques Used by ISIS:
- High-Impact Visual Guerrilla Propaganda: ISIS became infamous for its slickly produced short videos. Some were brutal (e.g., showing executions or battle footage set to nasheeds, Islamic chants), aiming to intimidate enemies and glorify martyrdom. Others were aspirational, depicting an idealized life in the caliphate with pious soldiers distributing candy to children – a 2-minute clip to sell the utopia. These videos were often under 5 minutes, with trailers and teaser clips under 30 seconds for quick sharing. They leveraged shock and awe to command attention, ensuring virality (even people horrified by them unintentionally spread the message by discussing it). Social media platforms like Twitter, YouTube, and Telegram were inundated with such content around 2014-2015. Social media “played a crucial role in the establishment of [the] Islamic Caliphate”, analysts noted, underscoring ISIS’s adept use of the Internet for promotion.
- Organic Network and Peer Recruitment: ISIS propaganda was uniquely effective because it combined top-down messaging with bottom-up spread. Recruits who made it to Syria would often post their own experiences on Facebook or Twitter – essentially becoming organic amplifiers praising ISIS. These personal accounts (blog-style posts, selfies with ISIS flags, etc.) gave authenticity to the narrative that ISIS was a legitimate state and brotherhood. Meanwhile, ISIS recruiters engaged one-on-one via messaging apps, using the content as conversation starters. This peer-to-peer radicalization pipeline meant that a curious individual who saw an ISIS meme or YouTube sermon could quickly end up in a private chat with an ISIS member, receiving tailored encouragement. In this sense, ISIS married data and organic channels instinctively: they targeted individuals who showed interest (data from online behavior) and then pulled them into a social process of radicalization (organic persuasion in small groups or chats).
- Guerrilla Adaptation and Seeding: When mainstream platforms cracked down (suspending ISIS-affiliated accounts), ISIS supporters nimbly shifted to other platforms or created new accounts – a guerrilla tactic of digital re-infiltration. They also tried creative methods like hashtag hijacking. For example, there were reports of pro-ISIS accounts using popular World Cup or celebrity hashtags to slip propaganda images into those trending topics (so unsuspecting users might stumble upon them). This is classic guerrilla messaging: striking in unconventional venues. Moreover, ISIS was very quick in exploiting news events – when anti-Muslim sentiment rose in Europe, they pushed the narrative “you’ll never be accepted there, come to the caliphate”. When police brutality incidents occurred in the U.S., ISIS propaganda bizarrely tried to recruit disenfranchised Black Americans, highlighting American racism as a reason to join jihad (even though it had little doctrinal connection – it was opportunistic messaging).
Counter-messaging by Governments: Recognizing the ISIS online threat, governments, including the U.S. State Department and allied nations, attempted to counteract it. A notable initiative was the “Think Again, Turn Away” campaign launched by the U.S. State Dept. in 2013-14. It used a Twitter account and other social media to engage directly with jihadist propaganda. The account would tweet rebuttals to ISIS claims, often linking news stories of ISIS atrocities or hypocrisy. They even produced short videos mocking ISIS (one notorious video, “Welcome to ISIS Land,” highlighted brutality to dissuade recruits). The intention was guerrilla-like: meet the propaganda on the same platforms with equally catchy counter-content. However, this effort was widely critiqued as ineffective and even counterproductive. Analysts like Rita Katz argued that the campaign often ended up amplifying jihadists by engaging them in Twitter arguments, and lacked credibility with the target audience. It was essentially government bureaucrats trying to meme-battle with savvy extremists – an uphill fight.
More successful were efforts that empowered credible voices in the Muslim community to speak out. For instance, the UAE and Saudi Arabia quietly funded online series featuring moderate clerics debunking ISIS’s interpretation of Islam. In the West, NGOs and activists (some of them former extremists) took to YouTube to share their stories and dissuade others. One strategy was redirect method advertising: when someone searched for pro-ISIS keywords, they’d be shown ads or suggestions for anti-ISIS content (using data-driven ad targeting to intercept interested individuals and “redirect” them to de-radicalization videos). This had some promise by catching people in that short window of curiosity and giving them an alternate narrative.
Another interesting countermeasure was on the technical side: companies like Twitter and Facebook ramped up AI to auto-detect extremist videos and take them down within minutes of upload. By shrinking the timeframe an ISIS video could circulate, they aimed to limit its reach (though encrypted platforms like Telegram then became ISIS’s refuge).
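In simplified form, the automated takedown approach described above can be sketched as a lookup of each upload against a shared list of fingerprints of previously identified extremist media. Production systems rely on robust perceptual hashes that survive re-encoding and cropping, plus classifiers for novel content; the exact-match version below, with placeholder entries, is purely illustrative.

```python
import hashlib

# Hypothetical shared list of fingerprints of previously identified extremist
# media. Real deployments use perceptual hashes rather than cryptographic
# digests, which trivial re-encoding would defeat.
KNOWN_BAD_FINGERPRINTS = {
    "placeholder_digest_1",
    "placeholder_digest_2",
}

def fingerprint(file_bytes: bytes) -> str:
    """Exact-match fingerprint of an uploaded file (simplified stand-in
    for a perceptual hash)."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block_upload(file_bytes: bytes) -> bool:
    """Check an upload against the shared fingerprint list before it goes live."""
    return fingerprint(file_bytes) in KNOWN_BAD_FINGERPRINTS
```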
Outcomes: By around 2017-2018, ISIS’s online presence waned significantly as its territorial defeats mounted. The flow of foreign fighters to Syria slowed, due in part to disillusionment as the reality contradicted ISIS propaganda, but also due to the aggressive content moderation and counter-messaging efforts that made recruitment harder. It’s estimated that over 50,000 Twitter accounts linked to ISIS were removed, and search engines adjusted their algorithms to demote extremist propaganda. The ISIS case proved that narrative dominance can translate to real power (thousands joined because of what they saw online), but also that coordinated counter-narratives can eventually reclaim the information space – albeit requiring a combination of censorship, community action, and on-the-ground events (like liberated civilians speaking out against ISIS).
Moonbrush's Relevance: If Moonbrush were contracted in a counter-extremism capacity, it could provide several value-adds. Using its information analysis, Moonbrush could map extremist narrative networks: identifying key influencer nodes, top messaging themes, and even emerging slang or code words extremists use. This intelligence would help authorities stay ahead of propaganda tropes. Moonbrush could then help craft counter-messaging campaigns that are culturally and emotionally attuned – for instance, advising to use former extremists as messengers (people who left ISIS and can speak to the reality; a tactic already used by some NGOs), which aligns with Moonbrush's focus on using the right trusted channels. Moonbrush's skill in multi-channel coordination might orchestrate a cross-platform campaign where a young at-risk individual sees consistent counter-ISIS narratives on every app they use, delivered by slightly different but harmonious voices (a sports hero condemning ISIS on Instagram, a popular imam discussing peace in a YouTube video, etc.). Essentially, Moonbrush could help flood the zone with positive narratives about community, purpose, and faith that undermine ISIS’s appeal, using the same memetic and data tools ISIS did. The firm’s expertise in intent data might even allow predictive identification of youths susceptible to radicalization (via their online behavior), enabling targeted interventions – though this raises ethical and privacy questions beyond our scope.
Case Study 3: Moonbrush's 30-Point Public Sentiment Swing (U.S. Statewide Initiative)
Background: This case is drawn from a Moonbrush client success story, illustrating a positive use case of narrative techniques to influence public opinion in a democratic context. The scenario involved a statewide referendum on a policy issue (details anonymized), where initial polls showed over 70% public support for the measure. Moonbrush was brought in by opponents of the initiative who sought to sway public sentiment enough to defeat it at the ballot box. This represents how data-driven, rapid messaging can be used in an above-board campaign to change a prevailing narrative. While not “propaganda” in the malign sense, the techniques mirror those used in propaganda – highlighting that the tools themselves are neutral, with ethics depending on usage and transparency.
Moonbrush's Strategy and Techniques: Moonbrush orchestrated a comprehensive campaign combining deep data analysis, tailored messaging, and high-volume organic outreach. Key elements included:
- Initial Data Analysis: Moonbrush began by dissecting the polling data and public sentiment drivers. They identified demographic segments and regions where support for the initiative was shallow or based on misperceptions. For example, perhaps urban young voters supported it for reason A, while rural older voters supported it for reason B. This granularity allowed Moonbrush to craft different counter-narratives for different groups, aiming to chip away at support on multiple fronts.
- Narrative Framing: Using insights (possibly focus groups or social media listening), Moonbrush formulated persuasive counter-arguments to the initiative. They likely reframed the issue in terms more resonant with voters’ values or fears. For instance, if the initiative was framed by proponents as “helpful reform,” Moonbrush might have reframed it as “risky experiment” or “hidden tax” – something that would cause that knee-jerk doubt in a 3-second read. This reframing would then be the backbone of all content.
- Content Blitz (Guerrilla Messaging): Moonbrush's team produced millions of content pieces over the campaign. These ranged from short videos, memes, infographics, social media posts, to likely op-eds and mailers – a truly omnipresent push. The content was hyper-targeted: each piece was designed for a specific audience segment, with language and imagery customized. One can imagine a Facebook meme for suburban parents focusing on how the initiative might harm schools, while a YouTube video for young professionals might warn about economic fallout – each under the umbrella of the same core narrative but tuned to the audience’s frequency.
- Organic Amplification and Advocacy: Moonbrush didn’t rely on ads alone; they cultivated an organic advocacy network. The case notes that by hyper-localizing the approach, messages were often delivered by community voices. This could mean recruiting local activists or stakeholders (e.g., a respected local business owner against the initiative) to share content and speak at events. By making the push appear citizen-led, they boosted credibility.
- Adaptation in Real-Time: Throughout, Moonbrush monitored real-time feedback – social media sentiment, trending queries, etc. They adjusted messaging on the fly. For example, if a particular slogan was catching on, they amplified it; if an angle wasn’t resonating, they pivoted to a new approach. This agile campaigning is only possible with a data command center keeping a finger on the public’s pulse. Moonbrush's integration of AI likely helped in spotting shifts quickly (a minimal monitoring sketch follows this list).
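As referenced in the real-time adaptation point above, a minimal sketch of a rolling sentiment monitor is shown below. The per-post sentiment scores are assumed to come from an upstream classifier, and the window size and pivot threshold are hypothetical; this is an illustration of the monitoring pattern, not Moonbrush's actual command-center tooling.

```python
from collections import deque

class SentimentMonitor:
    """Rolling monitor over a stream of per-post sentiment scores in [-1, 1]
    (scores assumed to come from an upstream classifier). Signals a pivot
    when the rolling mean turns sharply against the campaign's narrative."""

    def __init__(self, window=500, pivot_threshold=-0.15):
        self.scores = deque(maxlen=window)
        self.pivot_threshold = pivot_threshold

    def add(self, score):
        """Record one scored post and report whether a pivot is warranted."""
        self.scores.append(score)
        return self.needs_pivot()

    def rolling_mean(self):
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

    def needs_pivot(self):
        # Only trust the signal once the window has filled.
        return (len(self.scores) == self.scores.maxlen
                and self.rolling_mean() < self.pivot_threshold)
```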
Results: The outcome was a remarkable 30-point swing – from ~70% support to only ~40% support for the initiative on voting day. The measure was defeated, stunning observers. This quantifiable shift is rare in such a short span, highlighting the potency of the campaign. Voter turnout analysis showed which groups changed their stance, validating Moonbrush's targeted strategy. Importantly, this case was conducted in a legal, transparent campaign context (not disinformation, but persuasion), yet it mirrored “propaganda” in its systematic manipulation of narrative. It sets a benchmark for how combining big data and AI with a narrative strategy can move public opinion at scale.
Implications and Ethical Considerations: For government and public policy, this case is a double-edged example. On one hand, it shows that with sufficient data and coordination, even a strongly held public view can be reversed within months, through saturating the infosphere with targeted messaging. That could be encouraging for, say, public health campaigns (imagine shifting a population from vaccine hesitancy to acceptance using similar tactics). On the other hand, it raises concerns: the techniques could also be used to manipulate the public against their own interests, depending on who wields them. Moonbrush's case did not involve falsehoods as far as we know (they would argue it was about highlighting the cons of the initiative that people hadn’t considered), but in less scrupulous hands, such a powerful narrative machine could flood voters with misleading information just as effectively.
This case validated Moonbrush's model:
- Their data-centric approach pinpointed exactly where to focus persuasion efforts (efficient resource use).
- Their creative content engine was able to produce and distribute messaging at a volume and diversity that outmatched a typical grassroots campaign, ensuring voters encountered their narrative “everywhere they looked” (Moonbrush essentially created an echo chamber that made their message ubiquitous).
- Their use of hyper-personalization (even tailoring visuals by region or audience as noted in their methodology) likely made each voter feel the campaign message spoke directly to their situation.
- The subtle, organic feel of the campaign (hiding the puppet strings) meant minimal backlash or skepticism from the public; people weren’t turned off by obvious propaganda, because it hardly registered as a concerted campaign – it felt like a wave of local concern.
In the context of national security, one can extrapolate this to influence operations and counter-operations. If a foreign adversary tried to sway public opinion on a policy (for example, to discourage support for a security alliance), a Moonbrush-like response by the government could neutralize it by blanketing the public with affirming narratives about the alliance, through many voices and media. Conversely, authorities must be aware that adversaries could attempt a “Moonbrush-style” influence push covertly. The case study underscores the necessity for vigilance and the ability to detect when a rapid opinion shift is being engineered, so that defensive measures (like exposing the effort or counter-messaging) can be taken.
Discussion
The above analysis and cases paint a picture of the modern narrative battlefield. Several strategic themes and implications emerge, especially pertinent to government and security professionals:
- Narrative Control as a Security Priority: It’s evident that shaping narratives is now a core element of statecraft and conflict. Information power can achieve objectives that once required military or economic force – from swaying an election outcome to inciting unrest in a rival nation. The U.S. House Armed Services Committee has explicitly recognized that “propaganda in the 21st century media environment” poses challenges on par with physical threats. Our findings reinforce that: ignoring the information domain cedes ground to adversaries who will exploit it. Therefore, governments must elevate information operations and narrative defense in their national security strategies. This includes investing in specialized units or partnerships (like contracting firms such as Moonbrush) to handle continuous monitoring of information threats, rapid response capabilities, and proactive influence campaigns that support policy goals.
- Ethical and Legal Frameworks: Democratic governments face a conundrum – how to fight fire with fire without betraying their own values. Techniques like micro-targeting, persona management (fake online identities), and psychological influence veer into ethically gray territory. It’s one thing when used to sell a soft drink, another when used to sway democratic decisions. There’s a risk of undermining public trust if governments are seen as manipulating their citizens, even for benign ends. As our analysis noted, Moonbrush's approach is extremely potent but can be a “dream or nightmare”. The same personalization that boosts engagement can also exploit prejudices or fears, potentially reinforcing extremism if misused. Hence, officials must craft guidelines and oversight for narrative operations: ensuring transparency where possible, focusing on factual content (even if selectively presented), and building in ethics reviews (e.g., not targeting protected personal data or sensitive traits like religion inappropriately). International norms are also nascent – what constitutes unacceptable information warfare versus acceptable public diplomacy? These questions are being debated in policy circles. The goal should be maximizing defensive and truth-promoting uses of these tools while curbing malicious uses. Open communication about counter-propaganda efforts (after the fact, if not during) can help maintain public trust.
- Whole-of-Society Approach: Our cases showed that adversaries often target society’s fissures – racial divides, youth disillusionment, etc. Strengthening narrative resilience, therefore, isn’t just a task for government agencies or AI algorithms; it requires societal involvement. Education is key: just as citizens learn about cyber hygiene, they need to learn “information hygiene.” Finland’s example of national media literacy education is often cited. In Taiwan, as Audrey Tang described, creating a “prosocial civic infrastructure” for discourse is vital. This could mean government facilitation of fact-checking networks, support for independent quality journalism (the antidote to propaganda is often a populace that has access to and trusts credible news), and empowerment of community leaders to stand against disinformation. A firm like Moonbrush can assist by identifying which communities are being targeted by hostile narratives and advising on localized counter-actions (for instance, if a certain minority group is being flooded with propaganda, collaborate with that community’s leaders to respond). The emphasis should be on transparency and empowerment: instead of the state simply pushing messages, a more sustainable model is enabling citizens to recognize and reject manipulation.
- Technology and AI Arms Race: Both propagandists and defenders are leveraging AI. Deepfake technology, algorithmic targeting, and bots will only grow more sophisticated. Conversely, AI can help detect coordinated campaigns (through network analysis; a minimal detection sketch appears after this list) and even simulate adversary propaganda to test our defenses (“red-teaming” narratives). As noted in the literature, social media companies analyzing trillions of messages have created a new arena of “surveillance capitalism” that can be tapped by anyone willing to pay or infiltrate. National security planners must engage with tech companies – who control the platforms – to ensure timely information sharing and cooperation when threats emerge. The private sector (platforms and specialized firms like Moonbrush) holds much of the data and tools, so public-private partnerships are indispensable. For example, a government might not legally be able to run certain influence operations domestically, but a company could run a public safety awareness campaign that uses similar targeting. Clear protocols for this collaboration, respecting civil liberties, need to be established. Additionally, investment in counter-AI is needed: tools that automatically detect bot-driven trends or synthetic media before they mislead millions.
- Speed, Agility, and Preparedness: A recurring lesson is the importance of reacting fast. The first narrative often sticks, so being pre-emptive or at least extremely quick is crucial. Governments are traditionally not very agile communicators – bureaucracy and caution cause delays. This must change for the digital battlespace. One idea is having pre-prepared “counter-narrative toolkits” for anticipated scenarios. For instance, NATO could prepare a bank of meme-style responses to common Russian disinfo tropes, ready to deploy when those surface. Some defense departments are training “cyber warriors” in social media engagement and meme warfare. Moonbrush's operational model – with its 24/7 monitoring and ability to deploy content quickly across channels – is something government teams could emulate or outsource. Exercises and simulations (like “wargames” for information war) can help officials practice responding to a sudden propaganda blitz, much as they would for a natural disaster or cyberattack.
- Measuring Effectiveness: Finally, there is the question of how to measure success in narrative warfare. Kinetic conflicts have body counts and territory; narrative battles have more elusive metrics: opinions, sentiments, resilience. Moonbrush's case study gave a clear metric (public sentiment shift), but usually the picture is not so cut and dried. Governments will need to develop better indices of information-environment health, such as tracking public trust levels, the prevalence of false beliefs in populations, and perhaps even psychological well-being related to exposure to toxic information (since propaganda often correlates with heightened anxiety or polarization). These can serve as KPIs for narrative defense efforts. If a population’s belief in election legitimacy increases after a counter-messaging campaign, that’s a win. If polarization softens over time due to interventions, the strategy is working. Continuous polling and sentiment analysis (areas where Moonbrush's analytical tools would be useful) can inform this; a simple sentiment-shift sketch follows this list.
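To ground the network-analysis point in the Technology and AI Arms Race item above, the following is a minimal sketch (Python, standard library only) of one common heuristic: accounts that repeatedly share the same link within seconds of one another are paired, and pairs that recur often enough are merged into clusters for analyst review. The record format, thresholds, and function names are illustrative assumptions, not a description of Moonbrush's or any platform’s actual tooling.

# A minimal sketch of network-based detection of coordinated sharing, assuming
# input records of (account, url, timestamp_in_epoch_seconds). All names,
# thresholds, and the data format are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

def coordinated_clusters(posts, window=60, min_pair_hits=3, min_cluster=5):
    """Group accounts that repeatedly share the same URL within `window` seconds."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((ts, account))

    # Count how often each pair of accounts co-shares near-simultaneously.
    pair_hits = defaultdict(int)
    for shares in by_url.values():
        shares.sort()
        for (t1, a1), (t2, a2) in combinations(shares, 2):
            if a1 != a2 and t2 - t1 <= window:
                pair_hits[frozenset((a1, a2))] += 1

    # Union-find over pairs that co-occur often enough to look coordinated.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for pair, hits in pair_hits.items():
        if hits >= min_pair_hits:
            a, b = tuple(pair)
            parent[find(a)] = find(b)

    # Collect the resulting account clusters above the size threshold.
    clusters = defaultdict(set)
    for node in parent:
        clusters[find(node)].add(node)
    return [c for c in clusters.values() if len(c) >= min_cluster]

In practice an analyst would feed the flagged clusters into deeper review (account age, posting cadence, content similarity) rather than treating the heuristic output as proof of coordination.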
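Similarly, for the Measuring Effectiveness item, the sketch below shows the simplest possible sentiment-shift metric around a counter-messaging campaign. It assumes posts have already been scored for sentiment; the record format and scoring scale are assumptions, and in practice such a figure would be triangulated with polling and trust indices rather than read on its own.

# A minimal sketch of a before/after sentiment-shift metric, assuming records of
# (timestamp_in_epoch_seconds, sentiment_score) with scores in [-1, 1]; the
# scoring model itself is out of scope and not any specific toolchain.
from statistics import mean

def sentiment_shift(scored_posts, intervention_ts):
    """Mean sentiment after a counter-messaging push minus mean sentiment before it."""
    before = [score for ts, score in scored_posts if ts < intervention_ts]
    after = [score for ts, score in scored_posts if ts >= intervention_ts]
    if not before or not after:
        return None  # not enough data on one side of the intervention to compare
    return mean(after) - mean(before)

# Hypothetical usage: a positive value suggests sentiment moved in the desired
# direction, but confounders (unrelated news, seasonality) still require analyst
# and polling cross-checks before declaring the campaign effective.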
In essence, the discussion highlights that narrative warfare is now a permanent theater of operations. The capabilities exemplified by Moonbrush show what is possible – both constructive and destructive. The onus is on national security leaders to harness these capabilities to defend democracy and open society values, while preventing malign actors from dominating the discourse. The lines between marketing, psychology, and warfare have blurred; a meme or a tweet can be a weapon. As such, defense and security planning must integrate narrative strategies alongside traditional military and diplomatic tools.
Moonbrush's emergence in this space reflects a broader trend: the rise of specialized entities that operate where technology, data science, and strategic communications intersect. Partnering with such entities can give governments an edge in the information contest, injecting innovation and agility that bureaucracies often lack. However, oversight and alignment with public interest must guide these partnerships. A powerful narrative apparatus, if unchecked, could be turned inward or used for partisan manipulation, which would ultimately erode the democratic fabric it’s meant to protect. Thus, a recommendation is to develop clear mandates and possibly even legislation governing domestic use of counter-propaganda (to ensure it targets foreign deception, for example, and doesn’t become domestic propaganda).
Conclusion
In the digital attention age, the battle for hearts and minds unfolds one second at a time. Data-driven insights, organic social networks, and guerrilla messaging tactics have converged to make propaganda a fast-acting, pervasive force that can reshape reality for millions with a few well-placed words or images. This paper has explored how those techniques are employed to both advance nefarious agendas and, conversely, to defend truth and democratic values. The examples of Russian election meddling and ISIS recruitment starkly demonstrate the destructive potential of narrative warfare: adversaries can weaponize our social media feeds and our cognitive biases against us, “sowing doubt and dissension” with minimal cost. At the same time, the successful counter-campaigns and Moonbrush's case study prove that the tide can be turned. With the right strategy, it’s possible to engineer opinion change, to inoculate populations against lies, and to rally public sentiment in positive directions.
Moonbrush's capabilities, woven throughout our analysis, serve as a microcosm of the new paradigm of narrative engagement. The company’s ability to sense, detect, analyze, and respond to propaganda in real time aligns closely with what governments need in this fight (congress.gov). Tools like Moonbrush's unified data platform and AI-driven content personalization show how precision targeting can make messaging profoundly effective. By coordinating across channels and tailoring messages down to the individual, Moonbrush exemplifies the cutting edge of influence operations – essentially providing a playbook for narrative dominance that, if used ethically, can reinforce national resilience against disinformation.
For government officials and national security professionals, a few concluding points bear emphasis:
- Speed and Proactivity: The first-mover advantage in the information space is critical. Preparatory measures – from monitoring systems to pre-crafted counter-messages – should be in place so that responses to emerging false narratives are swift and sure. The days of reacting with a press conference hours or days later are over; by then, as Taiwan’s “humour over rumour” doctrine notes, the “toxic memes have already entered people’s long-term memory”. We must anticipate and preempt where possible.
- Collaboration with Experts: Engaging firms like Moonbrush is not just an option but often a necessity, given the technical and creative demands of modern propaganda fights. These experts bring interdisciplinary teams (data scientists, behavioral psychologists, digital creatives, etc.) to an arena that used to be the domain of spin doctors and public affairs officers. Governments should cultivate trusted partnerships to leverage this expertise, while also learning from it to build internal capacity.
- Holistic Strategy: Narrative warfare cannot be isolated from broader policy and societal context. A strong narrative defense also involves strengthening social cohesion, promoting media literacy, and addressing the grievances that make populations vulnerable to malicious narratives in the first place. It’s notable that propaganda often fails when its target audience is well-informed and confident in reliable institutions. So the long-term solution includes renewing public trust through transparency and good governance – essentially denying the fertile soil in which disinformation takes root.
- Adapting Legal Frameworks: We must update laws and norms to reflect the reality of foreign influence operations and even domestic use of powerful narrative techniques. This might involve clarifying what constitutes illegal propaganda or psy-ops (for instance, making it unlawful for foreign agents to masquerade as domestic actors online, with enforcement teeth), as well as oversight of domestic influence campaigns to guard against abuses. The goal is not to hamstring our ability to fight bad information, but to ensure it is done in a way consistent with democratic principles.
In closing, the digital battles of memes and minutes are no less consequential than the battles of tanks and missiles. Victory will depend on agility, clarity of message, and the trust of the populace. As we have shown, a headline glimpsed in 3 seconds can plant a belief, but a well-timed fact-check or an emotionally resonant truth can uproot a lie just as quickly. The task before us is to organize our data, our people, and our creativity to ensure that in this contest of narratives, propaganda’s distortion is met with powerful truth and strategic counter-narratives at every turn. The experience and tools of Moonbrush and similar pioneers give reason for optimism – that with ingenuity and resolve, free societies can prevail in the information domain without sacrificing the openness and liberties that define them. The attention age may have shortened the window for influence, but it has also opened new avenues to engage, enlighten, and protect the public mind, if we are prepared to seize them.
Works Cited
Dominic Eggel & Grégoire Mallard, “Narrative Warfare in the Digital Age,” Global Challenges (May 2023) – discusses how social media blurs truth and lies, enabling new forms of disinformation.
Pew Research Center, “The Future of Truth and Misinformation Online,” (Oct 2017) – notes that repetition and social media dynamics fuel belief in misinformation.
Susan Nolan & Michael Kimball, “Study: Few People Read What They Share,” Psychology Today (Dec 2022) – reports that 59% of shared links on Twitter weren’t read by the sharer.
Nadia Tamez-Robledo, “TikTok Is Still ‘Uncharted Territory’ for Fighting Misinformation,” EdSurge (Dec 2022) – citing NewsGuard study that 20% of TikTok videos contain misinformation.
Internet Research Agency Indictment (U.S. DoJ, Feb 2018) and Senate Intelligence Committee Reports (2019) – document Russian IRA tactics: targeting Black Americans, using race-related ads (66% of their Facebook ads) (bbc.com), and reaching 126 million on Facebook (theguardian.com).
House Armed Services Committee Hearing, “Countering Adversarial Propaganda (Emerging Threats),” H.A.S.C No. 114-59 (July 2015) – highlights challenges DOD faces from ISIS, Russia, China propaganda, calls for improved capability to “sense, detect, analyze, and respond” to 21st-century propaganda (congress.gov).
Rita Katz, “The State Department’s Twitter War With ISIS Is Embarrassing,” TIME (Sept 2014) – critiques the early U.S. counter-ISIS social media campaign.
SWI (swissinfo.ch), “Humour over rumour: lessons from Taiwan,” (April 2021) – interview with Audrey Tang on Taiwan’s rapid-response strategy using humor to combat fake news before it spreads.
BBC News, “Russian trolls’ chief target was ‘black US voters’ in 2016,” (Oct 2019) – summarizes Senate findings on Russian disinformation targeting and techniques.
The Guardian, “Russia-backed Facebook posts ‘reached 126m Americans’,” (Oct 30, 2017) – reports Facebook’s disclosure to Congress on reach of Russian content.