
Jay Mehta is COO of Seldon Capital, advising hedge funds on capital raising, operations, hiring and trading.
Regulation has a tendency to sneak up on emerging tech. It starts as guidance, gains teeth in a few key markets, then sweeps suddenly across jurisdictions. By the time the rules are written into law, the biggest opportunities are often already locked in by early movers. In AI, that moment is happening right now.
Governments around the world are defining what AI can and can’t do, and those decisions are becoming market filters. Investors and operators who misjudge the timing or substance of these rules risk being priced out of key geographies or spending heavily to retrofit products for compliance.
But the details differ widely across jurisdictions. Here’s how three major regulatory philosophies are shaping AI risk and opportunity, and what each means for investment strategy.
Innovation-First Regulation: Fast Lanes With Fine Print
Some governments are betting on growth, actively encouraging AI innovation and development. Many want to be home to the companies building the next wave of systems, and are willing to defer strict oversight until those systems are entrenched.
China has aggressively pursued AI leadership under its 2017 New Generation Artificial Intelligence Development Plan and, more recently, its 2023 Interim Measures governing generative AI services. The approach is two-stage: Let innovation run in controlled pilots, then step in with rules once products reach mass adoption, enforcing practices like watermarking and metadata labeling.
South Korea’s AI Framework Act, effective in 2026, will channel public investment into compute infrastructure and even intends to provide datasets for training. While the existing AI Basic Act from last year provided a legal framework, the newer provisions impose oversight on “high-impact” AI systems. SME and startup AI applications could enjoy a permissive environment bolstered by government support.
I think these markets are ideal for infrastructure-heavy bets, experimental platforms and partnerships that benefit from state backing. But innovation-first doesn’t mean unregulated. Monitor adoption thresholds closely since rules can arrive quickly once products gain traction.
Ethics-Driven Regulation: Soft On Paper, Strong In Practice
In some jurisdictions, ethical frameworks carry more weight than formal law, especially when working with enterprise or government clients. Compliance acts as a de facto gatekeeper to the market.
Japan has leaned heavily on soft law. Its 2019 Social Principles for Human-Centric AI still shape the market today, promoting fairness, sustainability and human dignity as key drivers of AI development. While not legally binding, these principles also inform many of the provisions in the country’s more recent AI promotion act, and I expect them to shape board-level discussions and investor ESG screens.
Singapore released its Model AI Governance Framework, backed by the open-source AI Verify toolkit. Grounded in voluntary but influential benchmarks across nine trust dimensions, the framework focuses on building public confidence in the technology before imposing any hard rules.
Saudi Arabia, aiming to become a regional tech hub, introduced its AI Ethics Principles in 2023, providing ethical guidance for AI development and deployment across all sectors.
Ethics-led environments may appear softer on paper, but they often impose heavier constraints on market entry. Systems that can demonstrate explainability and bias mitigation clear the gate more easily, and compliance can serve as a powerful sales strategy.
Risk-Based Regulation: Scale-Sensitive And Still Evolving
Risk-based regimes regulate by potential harm rather than technical design, with requirements scaling up as systems become more consequential.
The EU AI Act, formally adopted in 2024, is the flagship example. It classifies AI into four risk tiers, with high-risk systems subject to rigorous requirements on human oversight and third-party conformity assessments. Unacceptable-risk use cases are prohibited.
The U.S. remains a patchwork, but state laws are stacking up: California’s Delete Act for data brokers, Colorado’s algorithmic transparency rules, New York City’s restrictions on AI in hiring and more. Federal guidance, like the Blueprint for an AI Bill of Rights, strikes consistent themes (safety, privacy, protection from algorithmic discrimination) even without formal federal law in place. For now, investors should expect layered rules that vary state by state.
Finally, AI Safety Institutes, pioneered by the U.K. and since established in the U.S., Japan, Singapore and the EU, function as hubs for model evaluation and safety benchmarking. These institutes are emerging as influential third-party validators, especially for frontier models targeting government and enterprise buyers.
In risk-based markets, I’ve found modularity is the most important strategic asset. Systems built in layers can be adapted for different jurisdictions without wholesale redesigns or irreversible changes. This flexibility can mean the difference between scaling globally or being locked into a single geography.
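To make that layering concrete, here is a minimal sketch of one way a modular compliance design could be structured. Everything in it is hypothetical and for illustration only: the `JurisdictionPolicy` interface, the `EUHighRiskPolicy` and `CNLabelingPolicy` modules and the `release` function are invented names, not any real regulator’s requirements or any real library’s API. The point is architectural: per-market rules live in swappable modules while the core system stays untouched.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class InferenceResult:
    output: str
    explanation: str | None = None        # e.g., a plain-language rationale
    provenance_label: str | None = None   # e.g., an "AI-generated" content label


class JurisdictionPolicy(Protocol):
    """Interface every market-specific compliance module implements."""
    def apply(self, result: InferenceResult) -> InferenceResult: ...


class EUHighRiskPolicy:
    """Hypothetical EU-style check: block release if no explanation is attached."""
    def apply(self, result: InferenceResult) -> InferenceResult:
        if result.explanation is None:
            raise ValueError("High-risk tier: human-reviewable explanation required")
        return result


class CNLabelingPolicy:
    """Hypothetical labeling step: mark the output as AI-generated."""
    def apply(self, result: InferenceResult) -> InferenceResult:
        result.provenance_label = "AI-generated"
        return result


# One swappable policy stack per market; the core model code never changes.
POLICY_STACKS: dict[str, list[JurisdictionPolicy]] = {
    "EU": [EUHighRiskPolicy(), CNLabelingPolicy()],  # stacks can share modules
    "CN": [CNLabelingPolicy()],
    "SG": [],  # ethics-led market: voluntary benchmarks, no hard gate modeled here
}


def release(result: InferenceResult, market: str) -> InferenceResult:
    """Pass the core system's output through the target market's policy stack."""
    for policy in POLICY_STACKS.get(market, []):
        result = policy.apply(result)
    return result


# Usage: the same output clears different gates depending on the market.
draft = InferenceResult(output="loan approved", explanation="income and credit history")
print(release(draft, market="EU").provenance_label)  # "AI-generated"
```

Under this pattern, entering a new market means writing and registering one policy module rather than reworking the product, which is exactly the flexibility risk-based regimes reward.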
What Investors Should Be Doing Now
First, screen for regulatory readiness. In your due diligence, you may want to look for vendors already aligned with demanding frameworks like the EU AI Act or Singapore’s AI Verify. Participation in AI Safety Institutes can also be a strong indicator.
Diversify regulatory exposure. Assume every region you operate in will have a distinct compliance track, and prepare parallel governance stacks tailored to each.
Demand interpretability. Black-box models might score well on benchmarks, but clear, explainable systems are often easier to sell into regulated sectors like finance, education, healthcare and public infrastructure. And once embedded, they’re harder for competitors to replace.
Treat strong governance as a moat. Early public alignment with policy direction builds trust and credibility, and it’s often the fastest route to market access.
The bottom line: AI regulation isn’t following a single global template. That creates friction, but also openings for investors who can read and anticipate the shifts. Just as GDPR redrew the map for global data businesses, these emerging frameworks will define the next decade of AI market leaders.
The information provided here is not investment, tax or financial advice. You should consult with a licensed professional for advice concerning your specific situation.