This article reports a methods-focused reading of a curated policy corpus, examining how artificial intelligence (AI), when situated within sustainability-oriented regulation, may be associated with shifts in corporate governance toward stakeholder accountability. The analysis combines corpus linguistics (AntConc) with semantic network techniques (InfraNodus) to move from dense legal language toward a transparent account of recurring concepts and their relations. While any single method has limits, the combined approach is intended to provide both textual evidence and a system-level view, so that substantive claims remain cautious and testable.
Methodology: AntConc and InfraNodus complementarities
The corpus comprises six open-access instruments, plus one text annex, that, taken together, appear to structure contemporary discussions of AI and sustainability governance across jurisdictions: the EU AI Act, the Corporate Sustainability Reporting Directive (CSRD), the U.S. SEC Climate-Related Disclosure Rule, the EU Green Deal, the OECD AI Principles, and the ASEAN Guide on AI Governance and Ethics. Source PDFs were converted to UTF-8 text; running headers and pagination were removed; article and annex references were preserved; hyphenation was normalized; and domain acronyms (AI, ESRS, SEC, CSRD) were retained to reduce semantic loss.
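To make the cleaning pass concrete, a minimal Python sketch is given below. It assumes a prior PDF-to-text conversion, and the directory names and header-detection patterns are illustrative assumptions, not the exact scripts used.

```python
import re
from pathlib import Path

# Domain acronyms kept verbatim to reduce semantic loss (per the corpus description).
KEEP_ACRONYMS = {"AI", "ESRS", "SEC", "CSRD"}

def clean(text: str) -> str:
    # Drop bare page numbers and short all-caps lines as a rough proxy
    # for running headers (illustrative patterns).
    lines = [ln for ln in text.splitlines()
             if not re.fullmatch(r"\s*\d{1,4}\s*", ln)
             and not re.fullmatch(r"\s*[A-Z][A-Z /&-]{4,60}\s*", ln)]
    text = "\n".join(lines)
    # Re-join words broken by end-of-line hyphenation: "sustain-\nability" -> "sustainability".
    return re.sub(r"(\w+)-\n(\w+)", r"\1\2", text)

def lower_preserving_acronyms(token: str) -> str:
    # Lowercase for downstream counting, but keep domain acronyms intact.
    return token if token in KEEP_ACRONYMS else token.lower()

for raw in Path("corpus_raw").glob("*.txt"):  # hypothetical layout; assumes pdftotext output
    out = Path("corpus_clean") / raw.name
    out.write_text(clean(raw.read_text(encoding="utf-8")), encoding="utf-8")
```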
AntConc supports the “text evidence” side of the pipeline. Collocation analyses (symmetric ±15-token window, log-likelihood scoring, minimum-frequency and cross-document thresholds) were used to estimate associative fields around terms such as disclosure, assurance, risk, stakeholder, and artificial intelligence. The Clusters/N-Grams module (two- to five-grams, with a frequency floor) surfaced doctrinal phrases such as “double materiality principle,” “high-risk AI systems,” “undertakings shall disclose,” “greenhouse gas emissions,” and “sustainability reporting standards,” while keyword-in-context (KWIC) searches were run on the raw corpus to preserve legal syntax. These steps do not “prove” governance change, but they do provide replicable patterns and quotable lines that may be probative for subsequent empirical work.
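The same collocation logic can be approximated outside AntConc. The sketch below scores window collocates with Dunning’s log-likelihood under the ±15-token setting described above; the tokenizer, frequency floor, and file name are simplified assumptions, not AntConc’s internals.

```python
import math
import re
from collections import Counter

WINDOW = 15    # symmetric span, mirroring the ±15-token setting
MIN_FREQ = 5   # illustrative frequency floor

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())

def log_likelihood(k11: int, k12: int, k21: int, k22: int) -> float:
    # Dunning log-likelihood over a 2x2 contingency table: LL = 2 * sum k * ln(k/e).
    n = k11 + k12 + k21 + k22
    r1, c1 = k11 + k12, k11 + k21
    expected = [r1 * c1 / n, r1 * (n - c1) / n,
                (n - r1) * c1 / n, (n - r1) * (n - c1) / n]
    observed = [k11, k12, k21, k22]
    return 2 * sum(k * math.log(k / e) for k, e in zip(observed, expected) if k > 0)

def collocates(toks: list[str], node: str) -> list[tuple[float, str, int]]:
    all_counts = Counter(toks)
    positions = [i for i, t in enumerate(toks) if t == node]
    # Collect window positions as a set so overlapping windows are not double-counted.
    win_pos: set[int] = set()
    for i in positions:
        win_pos.update(range(max(0, i - WINDOW), min(len(toks), i + WINDOW + 1)))
    win_pos -= set(positions)
    win = Counter(toks[j] for j in win_pos)
    n_win = sum(win.values())
    scored = []
    for w, k11 in win.items():
        if all_counts[w] < MIN_FREQ:
            continue
        k12 = n_win - k11              # other tokens inside windows
        k21 = all_counts[w] - k11      # w outside windows
        k22 = len(toks) - n_win - k21  # everything else (approximate)
        scored.append((log_likelihood(k11, k12, k21, k22), w, k11))
    return sorted(scored, reverse=True)

toks = tokens(open("corpus_clean/csrd.txt", encoding="utf-8").read())
for score, word, freq in collocates(toks, "disclosure")[:10]:
    print(f"{word:20s} LL={score:8.1f} (in-window n={freq})")
```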
InfraNodus adds what can reasonably be called a structural view of the discourse: texts are represented as co-occurrence graphs, topic communities are detected, and centrality metrics (e.g., degree, betweenness, eigenvector) help indicate which terms function as hubs or bridges across topics. To reduce preprocessing sensitivity, three specifications were compared: English stopwords plus a domain stoplist to suppress boilerplate; English stopwords only; and a legal-sensitive variant that keeps function words likely to form legal formulae while retaining the domain list. No hand-pruning of nodes or edges was applied, and exports were saved for each run to support reproducibility. This description accords with how InfraNodus itself characterizes its method: semantic text networks with community detection and centrality measures to reveal structure.
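For readers who want to approximate the graph stage without InfraNodus, a minimal networkx sketch follows. The window size, toy stoplist, community algorithm (greedy modularity rather than InfraNodus’s own detection), and file name are assumptions for illustration only.

```python
import re
import networkx as nx  # pip install networkx
from networkx.algorithms.community import greedy_modularity_communities

STOP = set("the of and to in or for a an on with by as is are".split())  # toy stoplist

def cooccurrence_graph(text: str, window: int = 4) -> nx.Graph:
    # Link terms co-occurring within a sliding window; edge weight = co-occurrence count.
    toks = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP]
    G = nx.Graph()
    for i, u in enumerate(toks):
        for v in toks[i + 1 : i + window]:
            if u != v:
                w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
                G.add_edge(u, v, weight=w + 1)
    return G

G = cooccurrence_graph(open("corpus_clean/csrd.txt", encoding="utf-8").read())
communities = greedy_modularity_communities(G, weight="weight")    # topic-like clusters
betw = nx.betweenness_centrality(G)                                # bridge terms across topics
eig = nx.eigenvector_centrality(G, weight="weight", max_iter=500)  # hub influence
print("top bridges:", sorted(betw, key=betw.get, reverse=True)[:10])
print("communities detected:", len(communities))
```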
Triangulation is used to temper over-interpretation. Where InfraNodus ranks disclosure, risk, financial, or climate as high-betweenness bridges, AntConc collocates and KWIC are checked to see whether those terms typically appear in obligation-bearing or control-oriented contexts rather than incidental mentions. Conversely, when legally salient phrases appear peripheral in the network (often because function words are filtered), the legal-sensitive specification and KWIC help verify their presence. On balance, patterns observed here suggest, rather than definitively establish, a cross-jurisdictional emphasis on disclosure, risk, and assurance as anchors of governance, with AI-related vocabulary more often appearing as a connective layer between compliance-oriented clusters and technology or systems language. Transparency and accountability terms may look more peripheral in some texts and specifications; this, however, could reflect genre and drafting style as much as policy intent, and should be interpreted cautiously.
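The context check itself can be scripted. The sketch below pulls KWIC-style concordance lines for a bridge term and flags modal markers as a crude proxy for “obligation-bearing” contexts; the marker list and file name are illustrative assumptions.

```python
import re

# Hypothetical marker set; a crude proxy for obligation-bearing legal contexts.
OBLIGATION = re.compile(r"\b(shall|must|required?|obligat\w+)\b", re.IGNORECASE)

def kwic(text: str, term: str, span: int = 60) -> list[tuple[bool, str]]:
    # Return (obligation_flag, context) pairs for every raw-corpus hit of `term`.
    hits = []
    for m in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
        ctx = text[max(0, m.start() - span): m.end() + span].replace("\n", " ")
        hits.append((bool(OBLIGATION.search(ctx)), ctx))
    return hits

text = open("corpus_clean/eu_ai_act.txt", encoding="utf-8").read()
lines = kwic(text, "disclosure")
share = sum(flag for flag, _ in lines) / max(len(lines), 1)
print(f"{share:.0%} of 'disclosure' contexts carry obligation markers")
```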
The article introduces the AI–Policy–Governance Nexus as a conceptual scaffold: regulatory pressure may encourage AI integration (e.g., compliance automation, ESG-risk analytics, traceability, audit trails), which may support shifts in governance practices toward stakeholder-oriented accountability; over time, such shifts might contribute to strategic resilience. This is a theorized pathway, not a causal estimate. It is best treated as a set of operational hypotheses, linked to concrete clauses (e.g., AI technical documentation and post-market monitoring, ESRS-based disclosures and assurance pathways), that invites firm-level testing rather than presuming firm-level behavior.
Reproducibility and provenance are emphasized to enable independent scrutiny. Preprocessed texts and analysis exports (AntConc outputs and InfraNodus graph data) are available as open data on Zenodo (10.5281/zenodo.17095355). The methodological stance is deliberately conservative: parameters are reported, opaque graph pruning is avoided, and structural observation is paired with textual evidence where feasible. Even so, readers should assume that results can vary with corpus composition, preprocessing, and parameterization; alternative specifications may yield partially different emphases, especially for multi-word legal phrases and cross-jurisdictional terminology.
Finally, portability may be a pragmatic strength of this approach. Teams concerned with assurance readiness, regulatory gap analysis, or stakeholder-risk mapping can adapt the pipeline to their own corpora and publish artifacts for audit. As with any text-analytic method, triangulation with qualitative reading and, where possible, organizational data is advisable before drawing policy or strategic conclusions.
APA reference to the article
Cordeiro, C. M., Adomaitis, L., & Huang, L. (2026). The AI-policy-governance nexus: How regulation and AI shift corporate governance toward stakeholders. Technology in Society, 84, 103117. https://doi.org/10.1016/j.techsoc.2025.103117
Funding acknowledgement
This study is part of a larger project funded by the European Union under the Horizon Europe research and innovation programme, titled AIOLIA: Operationalizing AI Ethics for Learning and Practice, A Global Approach (Grant Agreement No. 101187937).