
Verfassungsblog

Feed: Verfassungsblog


A General Obligation to Monitor

Notice-and-takedown – this is the core principle of European platform regulation since the introduction of the eCommerce Directive in 2000. It is the principle of an internet in which every user becomes a “content creator”, without prior selection and evaluation of the content. It is the decisive factor in the fundamental difference between newspapers, radio, and television on the one hand and the internet on the other, between one-to-many and many-to-many communication. Although this liability privilege is coming under increasing pressure – legally through exceptions for copyrighted content and special monitoring obligations for violations of personality rights, politically through civil society initiatives such as Save Social, which are calling for the liability privilege to be reviewed – there has not been any fundamental legislative decision to move away from this principle in the last 25 years.

Instead, the revolution now seems to be starting in Luxembourg. In Russmedia Digital, the ECJ ruled at the beginning of December that in cases dealing with data protection violations, such as defamatory content, the notice-and-takedown procedure should not be applied, but rather that the respective platform is (jointly) liable for illegal content from the moment of its publication. Clearly unaware of the enormous implications of its decision for the freedom of expression and information of millions of users in the EU, the Court is thus demanding the establishment of a comprehensive monitoring system for communication in the digital public sphere.

Defamation on Online Marketplaces

The starting point for the ECJ’s ruling was a set of questions referred by a Romanian court. An unknown third person had placed advertisements on an online marketplace (owned by Russmedia Digital, which gives the case its name) that falsely suggested that the applicant in the original proceedings offers sexual services. The advertisement included photos of her and her telephone number, and was also reproduced on various other websites. After the applicant informed the owner, Russmedia, about the defamatory advertisement, Russmedia deleted it within less than an hour, but it remained accessible on third-party websites. The applicant then claimed non-material damages from Russmedia, including for violations of her right of personal portrayal and the rights to honour and reputation, and unlawful processing of personal data. After various lower courts initially disagreed on whether the applicant was entitled to damages, the Court of Appeal (Curtea de Apel Cluj) referred the question to the ECJ, essentially asking whether Russmedia (firstly) should be considered the data controller responsible for the processing of personal data – which includes the false information about the offer of sexual services, the applicant’s photo, and her telephone number – and (secondly) whether liability for such unlawful data processing is determined solely by the General Data Protection Regulation (GDPR) or whether the specific liability regime for online platforms applies (while the applicable law in this case was still the eCommerce Directive, the relevant provisions have since been transferred to the Digital Services Act (DSA)).

The issue of responsibility

Are operators of online platforms (or, in this specific context, online marketplaces) responsible for illegal content on their platforms, and if so, under which circumstances? The answer to this question is perhaps the most important legislative decision for the structure of the internet as we know it today. While the infamous Section 230 of the Communications Decency Act (“The Twenty-Six Words That Created the Internet”) protects platforms in the US from almost any kind of liability, Europe has taken a different path. Since the introduction of the eCommerce Directive 25 years ago, hosting providers (which include both online marketplaces and online platforms) are only protected from liability for illegal content as long as they have no knowledge of the content. As soon as they are notified, they must act – otherwise they become liable for the content (“notice-and-takedown”, Art. 14 eCommerce Directive, now Art. 6 DSA). Both the eCommerce Directive and the DSA also emphasize that providers cannot be subject to a “general obligation to monitor” the content published on their sites (Art. 15 eCommerce Directive, now Art. 8 DSA). The importance of this so-called “liability privilege” cannot be overestimated. Because platforms are not responsible for the content of third parties at the moment of publication, we have an internet where, in principle, everyone can express their opinion and share creative content without prior review—with all the advantages and disadvantages that such broad and initially uncontrolled participation in public discourse entails.

In Russmedia, however, the liability issue is now being approached from another side: instead of examining the (traditional) responsibility for content in accordance with the provisions of the eCommerce Directive and the DSA, data protection law is being used as a starting (and end) point. The ECJ first (convincingly) establishes that the applicant’s information—i.e., her photo and telephone number, among other things—constitutes personal data within the meaning of the GDPR, regardless of whether it is untrue (paras. 47-53). Since the false claims relate to the (alleged) sex life of the person concerned, they are also “sensitive data” within the meaning of Art. 9 GDPR, which is subject to special protection. The unlawfulness of the data processing, specifically the publication of (inaccurate) sensitive data about the data subject without her consent, is also unproblematic (paras. 80-84).

It is, however, substantially more difficult to determine who is the “controller” for this unlawful data processing in terms of data protection law. The key distinction which has to be made is between “controllers” (who decide on the “purposes and means” of data processing, i.e., process data in their own interest, Art. 4 (7) GDPR) and “processors” (who process data on behalf of someone else, Art. 4 (8) GDPR). The unknown user who created the advertisement clearly qualifies as a controller (para. 64). However, in the opinion of the ECJ, Russmedia (jointly with the unknown user) also qualifies as a controller. By granting itself extensive rights to the content created in its terms and conditions (including use, distribution, reproduction, modification, removal, and transfer to third parties), the company “can exploit those data for its own advertising and commercial purposes” (paras. 67-68), and, thus, does not only process the data on behalf of the user. Furthermore, the mere provision of an online marketplace as such leads the ECJ to the conclusion that Russmedia has had an impact on the “means” of that publication, in particular by influencing parameters for the dissemination, such as the target audience, the presentation, and the duration of the advertisement (paras. 70-73). In doing so, the ECJ is further developing its established jurisprudence on the joint responsibility of content creators and online platforms/marketplaces under data protection law (see in particular Wirtschaftsakademie Schleswig-Holstein).

GDPR trumps the eCommerce Directive

The Court therefore finds (convincingly up to this point) that Russmedia is responsible for the unlawful processing of personal data created by the unknown user. While the ECJ’s further comments on identification obligations for online users (paras. 77-106) and measures necessary to prevent the copying of personal data (paras. 107-126) require their own critical assessment elsewhere, the focus of this article will be on the consequences of the responsibility for data protection violations (under the GDPR) for the general liability regime for online platforms (within the meaning of the eCommerce Directive, now the DSA). In this context, as the ECJ notes, “[t]he question therefore arises as to the relationship between those two instruments of EU law” (para. 128). In a mere ten paragraphs, the ECJ then revolutionizes the established understanding of liability for online content, apparently without being aware of the implications of its assessment.

The ECJ starts by reiterating that both the GDPR (Art. 2(4)) and the eCommerce Directive (Art. 1(5)(b), now Art. 2(4)(g) DSA), mutually emphasize that they are “without prejudice” to the application of or, respectively, “shall not apply to” questions relating to the other legal act (paras. 129-133). However, in the present case, the outcomes under the GDPR (liability for unlawful data processing from the moment of publication on) and the eCommerce Directive (liability privilege as long as the hosting provider has not been notified) are clearly in conflict. The fact that the legislator stipulates that two legal acts do not apply to each other’s legal questions, even though they do in fact heavily interfere with each other, obviously puts the judiciary in a difficult situation. One would expect that these legal texts would now be examined intensively using various methods of interpretation, reviewing general legal principles of the relationship between lex specialis and lex generalis, and taking into account relevant primary law, in particular the Charter of Fundamental Rights.

The ECJ, however, took a different approach: it succinctly states that the GDPR being “without prejudice” to the liability rules of the eCommerce Directive merely means that an operator is “not automatically preclude[d] from being able to rely on [the liability rules] for matters other than those relating to the protection of personal data” (para. 134). The content of Art. 2(4) GDPR is therefore limited to a declaratory repetition of the scope of application of data protection law. Only if the GDPR is not applicable is there room for other legal regimes. Thus, the liability privilege of the eCommerce Directive does not apply to GDPR violations, and Russmedia is liable for the unlawful advertisement published by its (unknown) user (paras. 135-136).

A comprehensive filter regime

This is a bombshell. It reduces the scope of the liability regime for online marketplaces and platforms to an absolute minimum. As soon as content, be it a comment, an image, or a video, contains personal data, the online service becomes liable for it from the moment of publication. Such content containing (false) personal information, be it defamation, deepfakes, or other violations of personal rights, has always been a major focus of the debate on platform responsibility in recent years (including fundamental ECJ rulings such as Glawischnig-Piesczek). In all of these cases, the liability regime of the GDPR is now “overruling” the liability privilege. Consequently, instead of notice-and-takedown, it is publish-and-perish which applies to online platforms. If a platform allows its users to publish illegal content, it becomes liable.

How does this look in practice? In order to avoid the risk of becoming liable under the GDPR, platforms must install comprehensive AI-supported filter systems that identify (potentially) illegal content from the wide variety of different statements expressed by users and prevent its publication. Thereby, the obligation to implement “technical and organizational measures” (TOMs) to ensure the lawfulness of data processing under Articles 24 and 25 GDPR becomes a “general monitoring obligation” (for the questionable understanding of TOMs as an “identification obligation” for users, see paras. 85-106 Russmedia). Under these circumstances, content which was first deemed legal but is later assessed as illegal poses legal risks for services, leading to significant incentives for online platforms to prevent the publication of legal content in cases of doubt (over-blocking).

It is precisely this danger of suppressing legitimate contributions that brought tens of thousands of people onto the streets in 2019 in the context of the EU’s copyright reform to protest against “upload filters”. The provision in question contained a restriction of the liability privilege for “online content-sharing service providers”, such as YouTube, with regard to copyright-protected content, unless they had “made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works” (Art. 17(4)(b) DSM Directive). The ECJ emphasized that this open, technology-neutral wording ultimately constitutes an obligation to use automated detection and filtering systems for “prior review” (para. 54, Poland v. Parliament/Council). If such a filtering system were to lead to the “blocking of lawful communications”, it would be incompatible with the freedom of expression and information enshrined in Art. 11 of the Charter of Fundamental Rights (para. 86). In this regard, it also refers to its case law in Glawischnig-Piesczek and reiterates that rights holders (in copyright law) or the respective person (in the case of violations of personality rights) must specify the relevant content in such a way that it does not “require an independent assessment” by the service provider (paras. 89-90).

Back in 2022, in response to Poland’s action for annulment, it took the ECJ 41 paragraphs, an extensive consideration of the comprehensive set of procedural safeguards laid out in the Copyright Directive, and an interpretation of the ordinary law in accordance with fundamental rights to confirm the compatibility of an explicit legislative decision for “upload filters” with primary law. Now, in Russmedia, the ECJ does not test the compatibility of such a filter regime with fundamental rights at all. If European data protection (!) law is indeed to be interpreted as providing for comprehensive monitoring and filtering of communication – indirectly and without any indication that this was the intention of the legislator – the ECJ should have examined in detail whether these provisions are compatible with the freedom of expression and information of the users affected by the filtering systems. Instead, the ECJ simply states, without any further explanation or justification, that compliance with the obligations under the GDPR cannot be classified as a “general monitoring obligation” (para. 132). Thereby, the ECJ has missed the opportunity to clarify the relevance of the liability privilege (introduced by ordinary law) for the exercise of fundamental rights (and further develop the line of jurisprudence developed in Poland v. Parliament/Council), while at the same time reducing legal uncertainty regarding the vague exceptions for “special monitoring obligations”. Instead, the unsettling impression remains that the ECJ was not even remotely aware of the significance of its decision.

A new line of case law is in its infancy

After reading this ruling, one is left somewhat puzzled as to what the state of liability of hosting services for third-party content in the EU is. Although the decision directly concerns only an “online marketplace” and provisions of the eCommerce Directive that have since been replaced, the ECJ’s reasoning does not contain any restriction according to which these principles would not be directly applicable to online platforms and liability under the DSA. One needs to keep in mind that these rules not only affect Big Tech services like X (formerly Twitter), Facebook, and TikTok, but also, in principle, every niche and common good-oriented online platform. From very large online platforms, such as Wikipedia, to smaller Mastodon instances, the impact is likely to be quite significant. Without any limitation of liability, these services would be threatened in their existence.

There are two entry points for more restrictive interpretations of the judgement in the future: First, the requirements for joint responsibility under data protection law (paras. 66-67, to what extent does this line of argumentation, for example, apply to non-commercial services without algorithmic curation?); and second, the manifest unlawfulness of the content and its harmful nature in the specific case (see paras. 39-40 on the questions referred by the referring court). Furthermore, the ECJ has also established notice-and-takedown-like procedures in other decisions in data protection law (Google Spain, see also BĂ€cker in BeckOK Datenschutzrecht, 54th ed., Art. 2 GDPR, para. 35).

Germany’s Federal Court of Justice (BGH) now has the opportunity to contribute to the development of this new line of case law: For more than three and a half years, Renate Künast, a former German minister, and Facebook have been arguing before various courts about Facebook’s obligation to prevent the repeated publication of a misquote of Künast on the platform. While the Regional Court and Higher Regional Court of Frankfurt am Main had still regarded this as a question of the interpretation of the liability provisions of the eCommerce Directive, the BGH recognized the significance of data protection law and suspended the proceedings to wait for the decision in Russmedia. It remains to be seen whether the BGH will now be satisfied with the ECJ’s argumentation or whether it will offer the Court the opportunity to rectify the decision by asking further questions about its interpretation.

The post A General Obligation to Monitor appeared first on Verfassungsblog.

Patchwork Policing

Across Europe, police forces are gradually acquiring powers to deploy artificial intelligence (AI). In November 2025, the federal states of Baden-WĂŒrttemberg and North Rhine-Westphalia amended their state police laws to enable or expand the use of the US intelligence software Palantir – and thus triggered debate on AI use. In neighbouring France and Luxembourg, legislative debates have not yet escalated to the full regulation of automated data analysis, with both countries prioritising the authorisation of AI-supported video analysis in public spaces.

While the regulatory details may differ, the underlying dynamic is the same: legislatures are progressively expanding AI-assisted police powers without a coherent regulatory concept, exposing fundamental rights to uneven and unnecessary risks. Yet the use of AI by the police creates a variety of threats to the privacy and personal data protection of those (unwittingly) affected. There is also concern that new technologies, which law enforcement agencies do not control alone, will increase dependence on companies with sometimes questionable reputations. A joint regulation of police AI could both mitigate these risks and send a strong signal of Europe’s independent, rights-centred path distinct from that of the US.

Constitutional requirements for the use of automated data analysis

In Germany, the states of Hesse (in 2018 with Section 25a HSOG) and Hamburg (in 2019 with Section 49 HambPolDVG) were the first to establish powers for automated data analysis in their police laws – and constitutional complaints were filed against them. In February 2023, the Federal Constitutional Court (BVerfG) ruled on Hamburg (1 BvR 2634/20) and Hesse (1 BvR 1547/19), establishing the requirements for police powers to perform automated data analysis (see here and the following references). In doing so, the court demonstrated in an almost textbook manner that the severity of the interference with fundamental rights by automated data analysis can vary depending on the legal framework (para. 75 ff.). The legal requirements for constitutionally compliant data analysis must correspond to the respective severity of the interference, which the challenged provisions did not do.

This requires the interaction of various factors to be taken into account, such as the type and scope of the data involved (para. 79 et seq.) and the methods of analysis and evaluation used (para. 91 et seq.), which can either increase or reduce the intensity of the interference. Therefore, the legislature must, in particular when it comes to the prevention of crimes that do not yet pose a concrete threat, “lay down the essential principles for limiting the type and scope of the data and the processing methods themselves by law” (para. 112). The court thus laid the foundation for further constitutional debate on data analysis using artificial intelligence and, in particular, the controversial Palantir software.

Legislative reforms in Germany and their shortcomings

New momentum has now been brought to the debate by legislative reforms in the two federal states of Baden-WĂŒrttemberg and North Rhine-Westphalia, both of which recently amended their state police laws to enable or expand the use of Palantir. When comparing the new provisions in Section 47a PolG BW and Section 23 (6), (6a) PolG NW, it is interesting to note that the two federal states take very different approaches. Baden-WĂŒrttemberg opted for a graduated approach, specifying which data categories are excluded, included, or, if necessary, added as supplementary information to the analysis. Data from residential surveillance and online searches – generally considered the most serious interferences with fundamental rights in police data collection law – are excluded by the law. Regularly included are case processing data (criminal complaints, investigation reports, notes with data from informants and witnesses), case data (mainly data on persons involved in criminal investigations and their contacts), and data from police information systems. In addition, telecommunication traffic data (e.g. who called whom, location data), data from evidence, data from non-state databases or from separately maintained state registers, and data from separately stored internet sources may be included on a supplementary basis, where necessary, in individual cases. The inclusion of traffic data from cell site queries and the content of monitored telecommunications is only permitted in cases involving a concrete or specific danger (Section 47a (3) PolG BW).

North Rhine-Westphalia, on the other hand, refrains from such detailed regulations and merely stipulates that data from residential surveillance or online searches may only be included if this is essential to avert a present danger to the life, body or liberty of a person (Section 23 (6a) Sentence 4 PolG NW). Otherwise, the state law does not contain any provisions. It even allows the use of self-learning systems, which had previously been prohibited, thereby further interfering with fundamental rights. With such carte blanche for automated data analysis, the parliament in North Rhine-Westphalia leaves it to the administration to regulate the essential requirements for the use of AI in internal guidelines. This does not comply with the constitutional requirements outlined above. This comparison shows how differently two states deal with the risks identified in the use of AI by the police, despite clear guidance by the Federal Constitutional Court. A look at other European countries confirms that legislation on police AI seems to be shaped more by current political events rather than by a coherent legal approach.

Police Use of AI in France and Luxembourg

The legislative frictions across Germany reflect a political will to make automated data analysis fit within constitutional boundaries. Although such political effort has not yet been observed in France and Luxembourg, recent legislative developments in both countries show a determination to integrate AI tools into policing – albeit through a narrower regulatory focus on automated video analysis.

France moved particularly fast in regulating police AI uses against the backdrop of a sustained terrorist threat that can be traced back to major attacks such as the 2015 Bataclan massacre. Ahead of the 2024 Olympic and Paralympic Games, the French legislature adopted the loi n° 2023-380 du 19 mai 2023 (“Loi JOP 2024”), which, among several exceptional security measures, authorised experimentation with algorithmic video-analysis for the purpose of detecting predefined security-relevant events in public spaces such as abandoned objects, unusual crowd movements, intrusions into restricted areas, or fire outbreaks.

This legal framework limited the experiment to image data only, explicitly excluding biometric identification and confining processing to a closed list of safety-related events. Although conceived as a short-term, narrowly circumscribed experiment, political momentum quickly shifted: both the Paris Police Prefecture and members of the government publicly endorsed extending the experiment until the end of 2027. The Constitutional Council blocked the extension, but only on procedural grounds, without examining the substance of the measure (unlike the German Federal Constitutional Court). In fact, it held that the extension amounted to a cavalier législatif (a “legislative rider”), as the initial law was adopted under an accelerated procedure and its extension was inserted into a bill with which the former had no connection. Crucially, the Council did not rule out a permanent application of algorithmic video analysis, thereby allowing the gradual normalisation of overly intrusive police powers.

Luxembourg has taken a more restrained approach, which reflects both the reportedly limited use of AI tools by the police and the country’s relatively low levels of complex crime. Its debate surrounding amendments to the 2018 police law focused on authorising the use of video surveillance systems in public spaces. The new article 43ter enabled the police, with the authorisation of the Ministry of Internal Security, to capture incidents in predefined zones, including through zooming and automated analysis.

As in France, the authorised processing is limited to image data expressly for prevention, detection and prosecution purposes. Yet supervisory authorities warned that these statutory limits were not sufficient to prevent broader interferences with fundamental rights. Both the National Data Protection Authority and the Consultative Commission of Human Rights raised concerns (here and here, respectively) about e.g. the absence of explicit purpose limitation tied to police missions, the lack of detailed criteria for designating surveillance zones, as well as the fact that the text regulates camera deployment but not the software operating them, leaving room for later functional upgrades (including AI-supported analysis) without renewed authorisation. While the Parliament addressed only part of this criticism, the adopted text remains flexible to accommodate future technological developments, signalling the political interest in equipping police with new powers and the use of AI-driven tools.

EU dynamics vs. Member State practices

The examples of France and Luxembourg show that the lawful scope of AI use by the police is so far determined by the interplay between targeted application of narrow frameworks and constitutional review – often in fragmented ways that fall short of human rights standards. However, examples from the German states also show that even well-founded rulings by the constitutional court do not automatically lead to uniform legislative practice. It seems that it is the task of national parliaments to strike an appropriate balance here.

At the national level, debates remain relatively hesitant and inconsistent, mainly because the implications of the EU AI Act for law enforcement are still unfolding. Once fully implemented, the Regulation is expected to clarify the conditions under which commercial and in-house tools may be used by the police, while prompting Member States to revisit their domestic legal frameworks. Eurojust’s recent mapping confirms that currently, national approaches differ among Member States. Some favour the adoption of new statutory laws, others lean towards non-binding guidelines, while a third group considers existing laws sufficient to absorb AI-driven policing. The fragmentation is not surprising given the current lack of concrete EU-level guidance. The only exception is the European Parliament’s 2021 Resolution on AI in criminal law and its use by the police and judicial authorities in criminal matters, which, though useful, remains non-binding and predates the AI Act. The Commission’s forthcoming Guidelines (as per Art. 6 (5) EU AI Act) on high-risk AI systems, expected in February 2026, may therefore play a significant role in limiting divergence among Member States.

In any foreseeable scenario, EU data protection laws – the General Data Protection Regulation and particularly the Law Enforcement Directive – will remain central, already providing a harmonised framework for processing personal data by law enforcement authorities. As such, they are solid ground for safeguarding the rights to privacy and data protection. We should closely observe whether this ground will remain solid once the AI Act becomes fully operational, and as the saga of the Commission’s recently introduced (and critically received) Digital Omnibus Package concludes.


The Battle over the Sacred and the Profane

Sexual and reproductive rights in Europe are increasingly part of an intense struggle. This includes legal contestation through litigation and third-party interventions at, in particular, the European Court of Human Rights. It is however important to recognize that contestation also takes place in other, political and public, arenas. Interconnected actions, forming part of a broader European conservative right mission, include political and legal action in many other arenas, including in the European as well as national parliaments.

This struggle is about a political and religious backlash to a largely secular, progressive cultural and human rights revolution. It confronts opposing sides of (transnational) civil society, who both make moral, “sacred” claims, while profaning the opponent. Here, I will first discuss the European conservative right’s mission, the sacred dimensions to this mission, and its increasingly dense transnational network. I will then exemplify cases of struggle by turning to initiatives both on the European level (the promotion of a right to abortion as part of the European Charter and the ECI campaign My Voice, My Choice) and domestic parliamentary debates (the Netherlands).

The European Right’s “sacred” mission

Struggles around sexual and reproductive rights pit more liberal, progressive-oriented or “frontlash” actors against other, including non-liberal, often radical-conservative “backlash” organizations. In the actions of the latter, religion is an explicit and core dimension. The European Right – linking a variety of right-wing populist actors with radical, religious-conservative ones – is active on various fronts in order to promote an alternative vision to what are often indicated as “woke liberalism”, ”progressive ideology”, “gender ideology”, and the alleged European liberal hegemony. The supranational project of European integration and its complex human rights regimes, both in terms of the European Union and the Council of Europe, are a core target of these groups.

The European Right’s “sacred” mission is grounded in religion and religious claims. Religion – in the form of distinctive interpretations or utilisations of Christianity – is of strategic value and is instrumentalised in variegated courses of action. It forms the background for proposals for fundamental reform of the European institutions, it is used as a justification for strengthening national sovereignty, it serves as a fundamental value basis for contesting progressive rights promotion, and it provides a key legitimation for the restriction of rights on the domestic level. Regarding rights, there are roughly five areas in which radical-conservative counter-movements are predominantly active, in particular through third-party interventions, though not exclusively: a) right to family and parental authority; b) sexual/gender identity; c) reproductive practices; d) euthanasia; and e) freedom of expression. In recent years, these areas have become increasingly contested.

The sacred and the profane

The argument here follows a cultural- and political-sociological approach, inspired by Durkheim and later sociologists building on his work. From this sociological perspective, radical-conservative actors seek to construct an alternative to liberal understandings of rights by profaning or desacralising what they see as hegemonic, liberal understandings of rights. Contemporary “backlash movements” turn the hegemonic distinction between the sacred (etym. “sacer”, holy, dedicated to a god) and the profane (etym. “outside of the temple”) on its head: they criticise “sacred” civil, liberal characterisations of rights – such as the liberal emphases on universalism, individualism, equality, and emancipatory rights extensions for minority groups – and recast them as profane, i.e. polluted and impure (as promoting hyper-individualism and endorsing non-natural, “deviant” forms of behaviour that defy “natural” ones). In doing so, radical conservatives claim the status of victims for those who hold religious, that is, Christian, views.

Radical-conservative actors might be understood as heterodox movements, in that they contest the alleged hegemony of secular, liberal understandings of rights and their main forms of institutionalisation. One often-repeated argument from the radical-conservative right is that liberalism undermines the religious dimensions of societies. In this, they lay claim to “sacred” commitments (“deeply held values that are non-negotiable”) and to the sacrality of their positions, while denying such “sacred” status to the positions of their opponents (including liberal, pro-choice standpoints).

What is “sacred” or “absolute” is expressed in recurrent claims in both judicial and political contexts. This includes an insistence on subsidiarity and national sovereignty, not least to protect national (Christian) value communities from European intervention. The radical-conservative right further stresses (“sacralises”) the collective over the individual, for instance in terms of “sacrificial motherhood” (the subjection of the role of the mother to the “needs” of society, including in demographic terms), relating children’s rights and the status of the family to the best interest of society as a whole, claiming that euthanasia is not a strictly private matter, or safeguarding the majority’s (religious) feelings against blasphemous statements by individuals in the public sphere.

The networked European right

The “sacred” mission of radical-conservative actors is transnationally organised in various networks. One instance is a network called “Agenda Europe”, which has links to various radical-conservative actors that engage in political and legal mobilisation. In its key statement, Restoring the Natural Order (original version: 2014 [1]), the religious, sacred dimension is justified through natural law, strongly endorsed as an antidote to the “Cultural Revolution” of the 1960s, which has allegedly led to a “process of de-civilisation”. Natural law is put forward as a civilising force, while human rights are profaned as at best a pseudo-religion: “human rights documents are no absolute truths, but the outcome of a political process”. Natural law is instead “independent of politics, or of the human will”. In fact, “[t]here is a Natural Law, which human reason can discern and understand, but which human will cannot alter” (italics added). In relation to the right to abortion, the preface of the document states that “[t]he culture of life associated with Christianity has been largely abandoned and replaced by a veritable ‘culture of death’, which, out of inner necessity, will destroy from within any society that accepts and allows it”.

A right to abortion in the EU Charter

Understood in Restoring the Natural Order as an “encouraging” recent development, one clear point of rupture in relation to the right to abortion is the reversal of the Roe v. Wade judgment (1973) by the United States Supreme Court, in Dobbs v Jackson Women’s Health Organisation (2022). In this judgment, the Supreme Court pushed the right to abortion into a more restrictive, conservative direction by rejecting abortion as a constitutional right and leaving authority to regulate to individual states. This constitutes a major turning point in the US, but equally provoked a reaction on the other side of the Atlantic, prompting attempts to safeguard achievements around the right to abortion in European states (culminating for instance in France in the constitutionalisation of the right to abortion in 2024).

On the European level, it mobilised political forces in the European Parliament to adopt a resolution calling for the recognition of the right to abortion in the European Charter of Fundamental Rights, which explicitly stated that it acted against a pushback on gender equality and SRHR [sexual and reproductive health and rights] backsliding, and sought to constitutionally protect rights that are under attack. In the related parliamentary debate, the initiators (of Renew) called for the entrenchment of the right to abortion in the European Charter, while opposing right-wing actors claimed that the European Union should defend the right to life as well as children’s rights, and not promote a (profane) “culture of death”.

Rights contestation in domestic arenas

The campaign for a European right to abortion equally triggered reactions in domestic arenas. Take as an example the Netherlands, a country that until recently was considered a pioneer in the advancement of progressive rights. Here, two motions, initiated by the conservative-Calvinist SGP and supported by radical-conservative and populist parties, were adopted by the Dutch parliament in March 2025. The first asked the government to evaluate the consequences of the abolition, as of January 1, 2023, of the five-day reflection period for women seeking an abortion. The second asked to bring forward the evaluation of abortion procedures (currently planned for 2028). While for the SGP the motions were meant to investigate an allegedly explosive increase in abortions, according to the centre for sexual expertise Rutgers the two motions could have negative implications for women’s right to self-determination.

In reaction, in September 2025 Dutch left-progressive actors put forward a parliamentary motion for the recognition of a right to abortion in the European Charter of Fundamental Rights as well as in the UN Covenant on Civil and Political Rights. The intention was to safeguard (“sacralise”) the right to abortion in a world in which it is increasingly endangered. This motion provoked a further counter-move by conservative religious political groups, who urged the government to prevent the adoption of a right to abortion in European treaties (insisting on the national prerogative to regulate abortion). According to them, countries ought to retain the sovereign right to regulate abortion as they see fit, while the EU allegedly tries to impose its (profaning) values on member states in areas such as marriage, sexuality, or abortion.

My Voice, My Choice Initiative

Returning to the European level, the European Citizens’ Initiative My Voice, My Choice was equally a reaction to the developments around Roe v Wade in the US, as well as to the situation in certain European countries with de jure or de facto restrictions on abortion. The ECI collected over a million signatures, meeting the threshold for a successful initiative. On 2 December, the European Parliament held a hearing with the My Voice, My Choice campaigners, and on 17 December, the Parliament voted – with 358 votes – in favour of a related motion.

The visibility of the campaign provoked a clear reaction from radical-conservative forces. In the preceding months, Agenda Europe had claimed on its blog that the initiative was “in fact a resounding defeat for the abortion lobby”, not least because the earlier, “diametrically opposed” ECI One of Us had gathered 1.7 million signatures in 2014. One of the promoters of that ECI claimed that “[t]his result proves once again that Europe is pro-life at its core”. The ECI depicted the liberal-progressive position in profane, impure terms, denouncing abortion as “prenatal child murder”, a call for EU-funded “abortion tourism”, and a “normalisation of baby-killing”, while understanding human dignity in the sacred terms of including the dignity of all human beings, ostensibly including “children in utero”. In the Netherlands, the pro-life organisation Schreeuw om het leven (Cry for Life) organised a petition campaign to be presented to the Dutch Commissioner Wopke Hoekstra. And in the run-up to the December hearings and vote, various counter-events were organised at the EP, such as “Real Choice Means Real Support” and “My Voice My Choice: A Legal, Moral and Financial Fraud”. The pro-abortion motion was accompanied by four other motions against abortion, tabled by radical-conservative right-wing MEPs and party groups, stressing the principle of subsidiarity, the lack of EU competence, respect for national identity, and “motherhood as an essential contribution to society”.

Conclusion

The battle over the sacred and profane is evidently not a new phenomenon in Europe (just think of the debates over the preamble of the European Constitution or the Lautsi v Italy case). What does seem novel is the intensity, visibility, and active engagement in multiple arenas of increasingly well-organised radical-conservative actors, greatly facilitated by an ever more hostile international environment.

References
[1] In a second edition of this document, published in 2024 and made public on the organisation’s website, a new preface claims that the document was originally intended for private use of the network, and that it was illegitimately made public by “criminal computer hackers”.

The post The Battle over the Sacred and the Profane appeared first on Verfassungsblog.