Demystifying AI Governance: A Guide for Boardroom Decision-Makers
Boards must grasp the intersection of AI and governance, which is crucial for navigating the transformative impact of AI on businesses.
Boards need to
prioritize risk management in AI, focusing on
privacy, ethics, and legal compliance to mitigate
potential pitfalls.
AI governance faces
challenges in global cooperation and adapting
to evolving regulatory frameworks, requiring
nuanced approaches for compliance and accountability.
Boards play a crucial
role in ensuring responsible AI adoption by
aligning strategies with business goals, prioritizing
focus areas, and fostering transparent communication.
Introduction:
The simulation of human intelligence processes
by machines, particularly computer systems, is known
as Artificial Intelligence (AI).
Expert systems, natural language processing, speech
recognition, and machine vision are a few specific
uses of AI1. John McCarthy offers the following definition of AI: “It is the science and
engineering of making intelligent machines, especially
intelligent computer programs. It is related to
the similar task of using computers to understand
human intelligence, but AI does not have to confine
itself to methods that are biologically observable”.2
In its most basic form, artificial intelligence
is a field that solves problems by fusing computer
science with large, reliable datasets. It also includes
the machine learning and deep learning subfields,
which are often referenced when discussing artificial
intelligence. These fields are made up of AI techniques that aim to develop expert systems that can classify or predict outcomes based on input data.
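As a minimal illustration of this classify-or-predict idea (a sketch, not drawn from the article's cited sources, using the open-source scikit-learn library and its bundled iris dataset purely for demonstration), the following Python snippet trains a small model on labelled examples and predicts classes for unseen input data.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small labelled dataset (flower measurements and species labels)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a simple classifier on the labelled examples
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Predict classes for new, unseen input data and report accuracy
predictions = model.predict(X_test)
print("Accuracy on held-out data:", model.score(X_test, y_test))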
Boardroom Strategies for AI
Governance:
AI has emerged as a transformative force, permeating
various industries and reshaping the business landscape.
As organizations increasingly integrate AI into
their operations, it becomes imperative for corporate
boards to not only understand the technology but
also actively govern its implementation.
The intersection of AI and governance has become
a critical focus for organizations worldwide. As
technology advances, businesses are increasingly
leveraging AI to enhance productivity, strategy,
and overall operations. However, with these opportunities
come significant risks that need to be carefully
managed. There is a need to strike the right balance
between the tremendous upside of AI and the potential
pitfalls that can arise. This article sheds light
on the crucial aspects every board member should
be aware of in order to effectively govern their
organizations in the age of AI.
Embracing Lifelong Learning:
There is a need for board members to
become lifelong learners. Despite their years
of experience, board members must adopt a student
mentality, especially in the rapidly evolving
realm of AI. The reluctance to delve into technical
details often stems from a fear of complexity, but embracing a learning mindset is essential
for effective governance. Furthermore, boards
should support management in establishing dedicated
leadership roles and cross-functional teams
responsible for overseeing generative AI initiatives.
By fostering a coordinated approach and prioritizing
high-impact use cases, companies can streamline
deployment efforts and maximize the value derived
from generative AI. Earmarking a budget for educating the board on emerging technologies that may be key to the business, and reaching out to existing advisors such as lawyers or accountants to hold knowledge sessions, are good initiatives for a board with a learning mindset to adopt. Boston Consulting Group states that
leading companies spend up to 1.5% of their
annual budgets on learning and skill building.3
Key Questions for Governance
Oversight: To bridge the knowledge
gap, every board should include at least one
member well-versed in AI. Additionally, basic
training in AI fundamentals, without delving
too deeply into the technical intricacies, can
equip board members with essential questions.
Boards should raise questions ranging from understanding the data used in AI models to assessing associated risks, governance mechanisms, and potentially disruptive AI solutions, and should take stock of the company’s AI talent pool. Further, boards must evaluate
the company’s readiness to harness generative
AI by assessing its technological capabilities,
talent pool, and cultural disposition towards
innovation and change. Investing in upskilling the existing workforce, attracting specialized talent,
and fostering a culture of continuous learning
and experimentation are essential for staying
competitive in the age of generative AI4.
Evaluating the company’s readiness in
terms of resources and technological capabilities
to adopt AI-led practices is also a key question.5 At times, it may be more suitable to outsource AI capabilities to external service providers.
Leveraging External Resources
for Upskilling: Practical tactics for
board members to deepen their understanding
of AI as well as help the company onboard AI
include leveraging external resources. Some
platforms provide accessible insights into AI
trends, start-up activities, and critical perspectives,
offering an entry point for board members to
connect AI innovations with business solutions.
Furthermore, forging strategic partnerships
and alliances can augment internal capabilities
and facilitate rapid innovation in this dynamic
landscape.
Evolving Landscape of AI Technologies:
Organizations need to appreciate the evolution of AI over the years, with recent advancements largely attributable to cloud computing. The
accessibility of computational power and vast
amounts of data has propelled the development
of AI models. From traditional machine learning
to deep learning, and now to generative AI,
the technology has progressed to the point where
it can create content from scratch. Understanding
this evolution is crucial for boards to appreciate
the potential impact and applications of AI
in their respective industries. Gauging the potential impact of generative AI requires
a strategic outlook that encompasses both immediate
opportunities and long-term implications. Boards
must engage with management teams to assess
how the technology will influence their industry
landscape, business models, and competitive
positioning. By identifying early adopters,
potential disruptors, and emerging trends, companies
can proactively shape their strategies to leverage
generative AI effectively6.
AI and Governance: A 4-Pillar
Approach: At the intersection of AI
and governance, boards need to understand how
AI ripples through the four pillars of governance:
oversight, accountability, risk management,
and strategy.
AI Strategy
Alignment: Boards are increasingly
presented with AI strategies from their leadership
teams. There is a need to emphasize the importance
of aligning AI strategies with competitive pressures
in the market and internal challenges. The strategy
should have a clear and agile roadmap closely
linked to business outcomes, avoiding the adoption
of technology for its novelty. This may include
budget allocations for hiring AI talent, acquiring
AI technologies, or investing in AI startups
that align with the company's strategic objectives.7
The Board can also define key performance indicators
(KPIs) to measure the success and impact of
AI initiatives.8
Mergers and
Acquisitions: AI strategies often involve
considerations for acquisitions. The board’s
role is to scrutinize these proposals and ensure
they align with the company’s core competencies
and strategic roadmap, considering the competitive
landscape and industry benchmarks.
Focus of Governance
Committee: Within the governance committee,
attention should be given to five key aspects
related to AI: addressing bias in AI models,
ensuring AI applications are not manipulative,
safeguarding against cybersecurity threats,
preventing data poisoning, and staying on top
of regulatory compliance.
Regulatory
and Ethical Compliance: Compliance
with emerging AI regulations is not only ethical
but also a regulatory requirement. Boards must
be well-versed in the AI regulatory landscape
and update their risk management frameworks
accordingly. Continuous communication and updates
from the governance committee on new regulations
are crucial for staying ahead of compliance
requirements.
Managing Risks in AI:
While generative AI promises significant value
creation opportunities, it also presents inherent
risks that cannot be overlooked. Boards need
to ensure that management teams strike a balance
between innovation and risk management. For example, while it is innovative to hold board meetings and representations entirely online through video interfaces, there are associated risks. These risks are heightened by advanced uses and misuses of AI, such as the generation of deepfakes, i.e., digitally synthesized media that may appropriate the identity of an existing person such as a board member.9 This
entails thorough assessments of privacy concerns,
ethical considerations, security risks, and
potential legal implications associated with
generative AI. Boards should outline three pillars
of risk management: people, processes, and technology.
Firstly, organizations should invest in training
personnel handling AI models, restricting access,
and implementing procedures to inspect data
for biases. Secondly, robust processes, akin
to those used in cybersecurity, should be in
place, regularly checking AI models for drift
or other issues. Lastly, leveraging technology
such as automated risk management frameworks
can further enhance the governance of AI.
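To make the technology pillar more concrete, the snippet below is a minimal Python sketch of the kind of automated check mentioned above: it compares recent production inputs for one feature against the training-time reference distribution using a two-sample Kolmogorov-Smirnov test and flags potential drift for escalation. The data here is synthetic and the single-feature check is illustrative; a real monitoring framework would cover many features, outputs, and thresholds.

import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(reference: np.ndarray, production: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the production distribution differs significantly from the reference."""
    _, p_value = ks_2samp(reference, production)
    return p_value < alpha

# Synthetic example: training-time feature values vs. recent production inputs
rng = np.random.default_rng(seed=42)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
production = rng.normal(loc=0.4, scale=1.0, size=5000)  # mean has shifted

if feature_has_drifted(reference, production):
    print("Potential model drift detected - escalate to the risk owner for review")
else:
    print("No significant drift detected in this feature")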
Governance Frameworks and Compliance:
Boards should note that adopting a risk-based framework is crucial for governing AI effectively. For example, the European Union’s AI Act categorizes AI applications into risk tiers, including limited-risk, high-risk, and unacceptable-risk categories10.
The risk-based approach necessitates reporting
structures, and organizations should be prepared
for external audits similar to those conducted
for cybersecurity risks.
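As a simplified, hypothetical illustration of how such a risk-based framework might be operationalized internally (the use cases, tiers, and oversight rules below are illustrative and not taken from the AI Act's legal text), a governance committee could maintain an AI inventory that tags each use case with a risk tier and the oversight it triggers:

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

@dataclass
class AIUseCase:
    name: str
    tier: RiskTier

# Hypothetical internal inventory of AI use cases
inventory = [
    AIUseCase("Marketing copy generation", RiskTier.LIMITED),
    AIUseCase("Credit scoring model", RiskTier.HIGH),
    AIUseCase("Covert behavioural manipulation", RiskTier.UNACCEPTABLE),
]

# Oversight obligations a governance committee might attach to each tier
oversight = {
    RiskTier.LIMITED: "transparency notice to users",
    RiskTier.HIGH: "documented risk assessment, human oversight, periodic external audit",
    RiskTier.UNACCEPTABLE: "do not deploy",
}

for use_case in inventory:
    print(f"{use_case.name}: {oversight[use_case.tier]}")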
Challenges in AI Governance:
Ethical Considerations:
One of the foremost challenges in AI governance
is grappling with ethical dilemmas. AI systems,
particularly those powered by machine learning
algorithms, can perpetuate biases present in
the data they are trained on, leading to discriminatory
outcomes. Ensuring fairness and equity in AI
decision-making processes is paramount but challenging,
given the opaque nature of some AI algorithms
and the potential for unintended consequences.
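As one concrete, simplified way to surface such bias (a sketch assuming the model's decisions and a sensitive group attribute are available as arrays; the numbers are invented), the snippet below computes the positive-decision rate per group and the gap between them, a basic form of the demographic parity metric:

import numpy as np

def selection_rate_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups (0 = equal rates)."""
    rate_a = decisions[group == "A"].mean()
    rate_b = decisions[group == "B"].mean()
    return abs(rate_a - rate_b)

# Invented model decisions (1 = approved) and applicant group labels
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(f"Selection-rate gap between groups: {selection_rate_gap(decisions, group):.2f}")
# Group A is approved 80% of the time, group B 40%: a 0.40 gap worth investigating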
Regulatory
Frameworks: Another challenge in AI
governance is the pace of technological change,
which can outstrip the ability of regulatory
frameworks to adapt. Regulatory bodies worldwide
are grappling with the challenge of crafting
agile frameworks that can adapt to the pace
of technological advancement while safeguarding
against potential risks and abuses. For instance,
the increasing prevalence of deepfakes online may also increase the risk and regulatory exposure of an unsuspecting organization through various fraudulent and misrepresentative acts. An employee of a multinational firm
was recently duped by fraudsters who posed as the Chief Financial Officer and his colleagues using deepfakes on a video call, tricking the employee into paying out amounts equivalent to USD 25.6 million.11 Board members must be wary of hasty adoption driven by the misconceived belief that a yet-to-evolve regulatory framework will not apply to new-age AI practices. More often than not, the fact that a regulatory framework has not yet matured to address new technologies does not preclude it from applying to them in its current form. An organization has to be mindful of the regulatory risks arising from bias, inaccuracy, breach of
privacy, consumer protection, cyber and data
security, IP protection, and quality control.12
Data Privacy
and Security: AI systems rely heavily
on vast amounts of data, raising concerns about
privacy and security. Unauthorized access to
sensitive data, data breaches, and misuse of
personal information are persistent threats.
Governing AI in a manner that upholds data privacy
rights without stifling innovation requires
a nuanced approach that addresses these concerns
while facilitating data-driven advancements.
This should include clear identification of what data is input into AI-enabled systems and of the permissions necessary to use such data, especially where it contains sensitive information.
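As a simplified sketch of that identification step (the record schema and consent register below are hypothetical, purely to illustrate the control), an organization might strip any field the data subject has not consented to share before a record is passed to an AI-enabled system:

# Hypothetical customer record and consent register; field names are illustrative
record = {
    "customer_id": "C-1042",
    "purchase_history": ["laptop", "monitor"],
    "health_notes": "asthma",          # sensitive
    "email": "user@example.com",       # sensitive
}
consent = {
    "purchase_history": True,
    "health_notes": False,
    "email": False,
}

def redact_for_ai(record: dict, consent: dict) -> dict:
    """Keep only fields explicitly consented for AI processing."""
    return {k: v for k, v in record.items() if consent.get(k, False)}

print(redact_for_ai(record, consent))
# -> {'purchase_history': ['laptop', 'monitor']}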
Global Cooperation:
AI governance is further complicated by the
global nature of AI development and deployment.
Varying regulatory standards and cultural norms
across jurisdictions can create inconsistencies
and challenges in ensuring compliance and accountability
on a global scale. Effective governance frameworks
must navigate these complexities and foster
international cooperation to address shared
challenges.13 For multinational organizations,
boards can actively engage in collaborative projects for the development of AI solutions14, permissive data-sharing exercises15, and knowledge transfer of best practices16.
Moreover, there is a delicate balance between
regulation that ensures safety and ethics and regulation
that stifles innovation. Striking this balance is
one of the key tasks for policymakers.
Conclusion:
AI and governance are intricately linked, with
the former offering unprecedented opportunities
and the latter providing the necessary guardrails
to ensure responsible and ethical use. Organizations
must adopt comprehensive risk management frameworks,
stay abreast of evolving regulations, and prioritize
transparent communication. As generative AI continues
to evolve, businesses need to judiciously integrate
it into their operations while proactively addressing
associated risks. The path forward requires a delicate
balance between embracing the potential of AI and
safeguarding against unintended consequences.
Boards of directors play an important role in
shaping the responsible adoption and utilization
of generative AI within companies. By asking the
right questions and fostering a culture of informed
decision-making, boards can empower management teams
to unlock the full potential of generative AI while
upholding ethical standards and mitigating risks.
As the technological landscape continues to evolve,
boards must remain vigilant, adaptive, and committed
to ensuring that generative AI serves the best interests
of all stakeholders.
As AI continues to reshape industries and business
landscapes, board members are integral to guiding
organizations towards responsible and ethical AI
adoption. Aligning strategies with business goals,
prioritizing key focus areas, and embracing transparent
communication are essential steps for effective
leadership in the AI-driven future. By adopting
a comprehensive governance approach, boards can
steer their organizations towards success while
mitigating risks and addressing evolving regulatory
landscapes. Embracing lifelong learning, leveraging
external resources, and understanding the dynamic
nature of AI technologies are crucial for fulfilling
the governance oversight role.
The message is clear – boards will find
it highly advantageous to embrace AI and turbocharge
their governance practices in an increasingly competitive
business environment. The intricate link between
AI and governance requires organizations to adopt risk management frameworks, stay informed
about regulations, and prioritize transparent communication
to navigate the evolving landscape. As generative
AI advances, businesses must judiciously integrate
it into operations while proactively addressing
associated risks, striking a delicate balance between
harnessing AI’s potential and safeguarding
against unintended consequences.