<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>wicked policy problems Archives - The Loop</title>
	<atom:link href="https://theloop.ecpr.eu/tag/wicked-policy-problems/feed/" rel="self" type="application/rss+xml" />
	<link>https://theloop.ecpr.eu/tag/wicked-policy-problems/</link>
	<description>ECPR&#039;s Political Science Blog</description>
	<lastBuildDate>Tue, 12 May 2026 08:14:21 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://theloop.ecpr.eu/wp-content/uploads/2020/09/cropped-Favicon-32x32.png</url>
	<title>wicked policy problems Archives - The Loop</title>
	<link>https://theloop.ecpr.eu/tag/wicked-policy-problems/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Wicked problems are not algorithmic puzzles</title>
		<link>https://theloop.ecpr.eu/wicked-problems-are-not-algorithmic-puzzles/</link>
					<comments>https://theloop.ecpr.eu/wicked-problems-are-not-algorithmic-puzzles/#respond</comments>
		
		<dc:creator><![CDATA[İbrahim Hatipoğlu]]></dc:creator>
		<pubDate>Tue, 12 May 2026 08:18:00 +0000</pubDate>
				<category><![CDATA[All Articles]]></category>
		<category><![CDATA[Digital Governance]]></category>
		<category><![CDATA[Featured]]></category>
		<category><![CDATA[algorithmic governance]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[public administration]]></category>
		<category><![CDATA[public policy]]></category>
		<category><![CDATA[value conflict]]></category>
		<category><![CDATA[wicked policy problems]]></category>
		<category><![CDATA[wicked problems]]></category>
		<guid isPermaLink="false">https://theloop.ecpr.eu/?p=28124</guid>

					<description><![CDATA[<p>Algorithms can help governments manage complexity. But they cannot settle disputes over fairness, dignity and responsibility. İbrahim Hatipoğlu argues that so-called 'wicked' policy problems require political judgement before technical optimisation</p>
<p>The post <a href="https://theloop.ecpr.eu/wicked-problems-are-not-algorithmic-puzzles/">Wicked problems are not algorithmic puzzles</a> appeared first on <a href="https://theloop.ecpr.eu">The Loop</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p class="has-medium-font-size">Algorithms can help governments manage complexity. But they cannot settle disputes over fairness, dignity and responsibility. <strong>İbrahim Hatipoğlu</strong> argues that so-called 'wicked' policy problems require political judgement before technical optimisation</p>



<h2 class="wp-block-heading" id="h-the-promise-of-algorithmic-government">The promise of algorithmic government</h2>



<p>Across Europe, welfare agencies are turning to algorithms to detect fraud and error under real pressures of scale. In the UK, a Department for Work and Pensions algorithm <a href="https://www.theguardian.com/society/article/2024/jun/23/dwp-algorithm-wrongly-flags-200000-people-possible-fraud-error">flagged around 200,000 housing benefit claims for review</a>, with official figures showing that two-thirds of high-risk referrals were legitimate. In France, the family benefits agency CNAF has used <a href="https://edri.org/our-work/cnafs-discriminatory-scoring-algorithm-10-new-organisations-join-the-case-before-the-conseil-detat-in-france/">risk-scoring to select benefit recipients for investigation</a>, prompting a legal challenge from civil society groups.</p>



<p>These cases are often described as problems of accuracy, bias, or transparency. They are certainly that. But they also reveal a deeper problem. At scale, risk-scoring may help agencies manage volume. Yet prioritisation is never merely technical. It also shapes who comes under suspicion, and on what terms.</p>



<p>Fraud detection is not merely an information-processing problem. It requires the state to balance efficiency with dignity, suspicion with trust, and fiscal responsibility with social protection. A better algorithm may reduce some errors. But risk-scoring still shapes which cases receive scrutiny, which errors are treated as urgent, and where the burden of explanation falls.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Wicked policy problems are not wicked because governments lack enough data, but because societies disagree about what should count as fair or legitimate</p>
</blockquote>



<p>This is a central limit of algorithmic governance. Algorithms can classify, predict, and prioritise. But wicked policy problems are not wicked simply because governments lack enough data. They are wicked, above all, because societies disagree about what should count as fair, proportionate, efficient, or legitimate.</p>



<p>When algorithms manage such problems, they do not remove politics. They relocate it.</p>



<h2 class="wp-block-heading" id="h-wickedness-lies-in-competing-values">Wickedness lies in competing values</h2>



<p>Horst Rittel and Melvin Webber introduced the idea of <a href="https://link.springer.com/article/10.1007/BF01405730">wicked problems</a> to describe social policy problems that cannot be solved like engineering puzzles. They have no final formulation, no clear stopping rule and no single test of success.</p>



<p>This describes many areas in which governments now use algorithmic systems, including welfare, migration, policing, education and health. These problems are not difficult only because they involve large volumes of information. They are difficult, above all, because they involve competing values.</p>



<p>Consider welfare fraud and error detection. Fiscal responsibility matters because public money should be protected from fraud and error. But so does social protection. People should not be deterred from claiming support by fear of surveillance or punishment. Dignity matters as well. Poverty should not make citizens permanently suspect. Data can inform these choices, but no dataset can tell us how to rank these values. That ranking is a political judgement.</p>



<p>In each such case, the value conflict was already there. The algorithm simply makes one way of managing it look technically necessary.</p>



<h2 class="wp-block-heading" id="h-the-neutrality-trap">The neutrality trap</h2>



<p>Algorithmic systems need targets. They require variables, thresholds, categories and proxies. This is not a flaw. <a href="https://theloop.ecpr.eu/the-world-at-our-fingertips-just-out-of-reach-the-algorithmic-age-of-ai/">It is how they work</a>. But in public policy, those technical choices often carry political meaning.</p>



<p>A fraud detection model requires someone to define what counts as suspicious. A risk-scoring tool requires someone to decide which features are relevant. Code gives these choices a technical form, while their political character remains.</p>



<p>The deeper danger is that algorithms can make political choices harder to see. Once a value judgement has been translated into a model, it may appear as an administrative output such as a score, a flag, a ranking or a recommendation. The citizen then confronts not a political decision, but a technical result.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Algorithms can make political choices harder to see. Citizens may confront technical results, rather than political decisions</p>
</blockquote>



<p>This changes the citizen-state relationship. Australia’s <a href="https://robodebt.royalcommission.gov.au/publications/report">Robodebt scheme</a>, an automated debt raising and recovery programme, relied on data matching and averaged income figures to raise welfare debts. The process shifted work, proof and risk onto welfare recipients, including people with limited capacity to navigate online compliance systems.</p>



<p>A similar lesson came from the Dutch SyRI case. That system combined data across public bodies to identify people considered more likely to commit benefit or tax fraud. In 2020, a Dutch court <a href="https://www.ohchr.org/en/press-releases/2020/02/landmark-ruling-dutch-court-stops-government-attempts-spy-poor-un-expert">halted its use on human rights grounds</a>. Beyond opacity, the case showed how digital welfare systems can turn poverty into a basis for administrative suspicion.</p>



<p>These cases show why 'human in the loop' is not enough. If the algorithm structures the options, defines the categories and shifts the burden of proof, human review may come too late.</p>



<h2 class="wp-block-heading" id="h-what-algorithms-can-still-do">What algorithms can still do</h2>



<p>Public institutions face real pressures to process claims, detect anomalies and allocate limited resources. In these tasks, algorithmic systems can be useful. But the cases above also show how quickly assistance can begin to structure decisions. Systems that flag, score or rank do not merely support administration. They shape who is examined, what must be proved and where the burden falls.</p>



<p>The problem is not that algorithms enter public administration. The problem is that they often enter precisely where public judgement is most needed.</p>



<p>Algorithms should support that judgement, not replace it. They can help officials see patterns and identify cases needing attention. But the more a decision depends on competing values, the more limited, subordinate and reviewable the role of automation should be.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Algorithms should support public judgement, by helping officials to identify cases needing attention, but they should not replace it</p>
</blockquote>



<p>This requires more than transparency. Transparency may help explain how a model works. It cannot tell us whether the model should be used for that purpose. Nor can transparency settle how efficiency should be weighed against dignity.</p>



<p>Those questions belong to democratic politics.</p>



<h2 class="wp-block-heading" id="h-keep-judgement-in-public-hands">Keep judgement in public hands</h2>



<p>The real problem with algorithmic governance is not that algorithms are always wrong. It is that they are often asked to be right about the wrong kind of question.</p>



<p>Wicked problems cannot be solved by optimisation alone because their difficulty lies in disagreement over values. To optimise, one must first decide what counts as success. In public policy, that decision is precisely what citizens, officials, courts, parliaments and civil society must debate.</p>



<p>Algorithms can help governments administer. But they cannot answer how a society should distribute support, suspicion and responsibility, or how trust, equality, efficiency and fiscal responsibility should be weighed.</p>



<p>When governments use algorithms in wicked policy problems, the aim should not be to remove politics from administration. It should be to keep politics visible, contestable and accountable.</p>
<p>The post <a href="https://theloop.ecpr.eu/wicked-problems-are-not-algorithmic-puzzles/">Wicked problems are not algorithmic puzzles</a> appeared first on <a href="https://theloop.ecpr.eu">The Loop</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://theloop.ecpr.eu/wicked-problems-are-not-algorithmic-puzzles/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
