With the AI Framework Act now in force, B2B enterprises must prioritize strategies for ensuring "transparency".


It has been one month since the “Framework Act on the Development of Artificial Intelligence and the Establishment of a Trust-Based Environment” (hereinafter referred to as the “AI Framework Act”) took effect on January 22, 2026. The core principle of this law is that it holds the “end provider”—the entity that provides services to customers—accountable, rather than the entity that developed the technology.

A review of the [Draft of Seven Notifications and Guidelines, Including Guidelines for Ensuring Transparency, Safety, and Reliability] released by the Ministry of Science and ICT reveals that these go beyond mere legal regulation and read more like service design guidelines for earning customer trust. Marketing copy, service planning, and product design itself are now subject to legal review.

💡 Quick Overview of the Key Points of the AI Framework Act
* Even if you only use an API to provide services to customers, you are subject to legal obligations as an "AI business operator."
* Prior notification before using AI and labeling of AI-generated content are mandatory. These requirements must be naturally integrated not only into the terms of service but also into the service’s user experience (UX).
* For services in sensitive sectors such as healthcare and finance, additional obligations regarding risk management, human oversight, and documentation apply, and a grace period of at least one year is provided.

Even if you provide services by integrating external APIs, you are still considered an "artificial intelligence service provider."

Who is legally liable for AI services?

The key point to clarify in the AI Framework Act is “who bears responsibility.” Even if a company does not develop large language models (LLMs) directly but provides services by integrating external APIs, it is considered an “AI service provider” and is subject to legal obligations as long as it ultimately provides the service to customers.

According to the Ministry of Science and ICT’s [Guidelines for Ensuring Transparency], when an AI technology developer (A) and a service provider (B) utilizing that technology are separate entities, the obligation to ensure transparency falls on Company B, which has direct contact with users. In other words, our company—which communicates with customers and provides services through marketing and sales—is the entity responsible for this obligation.

For example, if a CRM SaaS provider adds an “AI email summarization” feature, it is the SaaS company, not the developer of the underlying LLM, that must inform customers of the AI’s use and provide the relevant information. It is therefore advisable to consult your internal development and legal teams to determine whether your company qualifies as the “end provider” and to establish an appropriate response framework.

Here’s how to ensure transparency through “notification” and “disclosure.”

The transparency guidelines under the AI Framework Act can be summarized by two principles: “If AI was used, disclose this upfront (prior notice), and ensure that the results are distinguishable (labeling).” This goes beyond a mere legal obligation; it serves as the starting point for designing a user experience that builds trust with customers.

Is it enough to simply include it in the terms and conditions?

“Prior notice” refers to measures taken to ensure that users clearly understand that a feature is AI-based before they interact with the AI. Simply adding a single line to the terms of service is unlikely to be considered sufficient to fulfill this obligation. The notice must be integrated into the service flow so that customers can naturally recognize this information at every point of contact where they use AI features.

For chatbot-based services, it is effective to clearly state that “This conversation is conducted by AI” via a pop-up or a fixed banner at the top before the conversation begins. For features such as customer data analysis or content recommendations, placing an AI icon or a help tooltip near the corresponding function button is effective. It is necessary to identify the entry points for AI features within the service and add clear disclosure statements to prevent customer confusion.

When displaying the final output, you must also consider its distribution outside the organization.

Just as important as prior notification is the need to “label” AI-generated content so that it can be clearly identified. The AI Framework Act mandates “intuitive identification labels” for content that poses significant social risks, such as deepfakes, and requires visible or invisible (e.g., metadata) labels for general generative AI outputs as well.

Violations of the AI content labeling requirement may result in a fine of up to 30 million won. However, a grace period of at least one year is planned for the initial phase of the law’s implementation.

Since a company remains responsible for the content it generates until it leaves the service, technical measures must be put in place to ensure that files carry an identifying mark wherever sharing or download functions are available. For example, for AI-generated marketing images, the system can be designed to automatically insert a watermark such as “AI Generated” in one corner of the image upon download.
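As one concrete approach, an invisible marker can also be written into the file itself so that it travels with the image through download and sharing. Below is a minimal stdlib Python sketch that appends a PNG `tEXt` chunk keyed `AI-Generated`; the keyword, helper names, and chunk convention are illustrative assumptions, not a format the Act prescribes.

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def add_ai_marker(png: bytes, keyword: bytes = b"AI-Generated",
                  value: bytes = b"true") -> bytes:
    """Insert a tEXt chunk carrying an AI-generation marker just before IEND."""
    if png[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    marker = _chunk(b"tEXt", keyword + b"\x00" + value)
    iend_type = png.rfind(b"IEND")   # type field of the closing IEND chunk
    iend_start = iend_type - 4       # its 4-byte length field precedes it
    return png[:iend_start] + marker + png[iend_start:]

def make_demo_png() -> bytes:
    """Build a 1x1 grayscale PNG purely for demonstration."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return (PNG_SIGNATURE + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", idat) + _chunk(b"IEND", b""))
```

A visible watermark, by contrast, requires rendering text onto the image, for which an imaging library such as Pillow would typically be used.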

| Category | Core principle | Execution example |
| --- | --- | --- |
| Prior notice | Ensure users clearly recognize that the feature is AI-based before they interact with it | Pop-up before a chatbot conversation starts: “This conversation is conducted by AI.” · Icon and tooltip next to the feature button: “AI-powered recommendations” |
| Output labeling | AI-generated output carries visible or invisible identifying marks | Watermark automatically inserted when an image is downloaded: “AI Generated” · Generation information recorded in file metadata |

If an AI system is designated as a "high-impact AI" under the AI Framework Act, it is subject to documentation requirements.

If our service is likely to have a direct impact on human safety, fundamental rights, or significant social and economic interests, it will be classified as “high-impact AI” and subject to stricter oversight. B2B solutions in sensitive areas such as hiring screening, medical diagnosis, and credit scoring may fall into this category.

If the AI is deemed high-impact, it is essential to establish a risk management system, implement human oversight procedures, and document the entire process. For example, even if the AI in an HR solution recommends accepting or rejecting a candidate, a human must be involved in the final decision, and that process must be documented.

Fortunately, the government plans to grant a grace period of at least one year following the designation of high-impact AI, and it has established an “AI Framework Act Support Desk” to assist companies facing uncertainty. Through this desk, companies can receive consulting from specialized organizations such as the Telecommunications Technology Association (TTA) and the Korea Information Society Development Institute (KISDI), so it is advisable to objectively assess the risk level of your services and begin making the necessary preparations.

Artificial Intelligence Framework Act Support Desk Website

For B2B companies, the AI Framework Act could be an opportunity to build trust.

The AI Framework Act is not merely a set of regulations to be followed, but a clear set of guidelines for creating AI services that earn customers’ trust. The transparency (notification and labeling) and accountability (documentation) required by the Act are mechanisms that let customers use our services with peace of mind, and this ultimately becomes a long-term source of competitive advantage for the company.

Now, while the law’s grace periods are still running, is the time to get your systems in order and build customer trust. If you feel overwhelmed by complex legal provisions, check out our checklist designed for immediate practical application.

📌 10-Item Practical Checklist for Preparing for the AI Framework Act

1. Review of Responsible Parties

Have you reviewed with the Legal and Development teams whether our service qualifies as an “end provider” that delivers value to end customers using AI technology?

2. Establish a Response System

Have you established internal procedures and designated personnel to handle legal issues or customer inquiries related to AI?

3. Placement of Advance Notices

Have you placed indicators at every point of entry where customers use AI features (such as buttons, pop-ups, and tooltips) to clearly indicate that they are AI-powered?

4. Clarification of Notification Text

Have you used intuitive and clear disclosure statements—such as “This conversation is conducted by AI” or “AI-powered recommendations”—to ensure customers do not misunderstand?

5. Incorporation of Terms and Policies

Have the purpose and scope of AI use, as well as the rights related to the generated content, been specifically outlined in the Terms of Service or Privacy Policy?

6. Application of Visual Indicators

Have you included watermarks or text such as "AI-generated" or "AI Generated" on AI-generated images, videos, and text to make them identifiable?

7. Preservation of Labels When Content Leaves the Service

Is the system designed so that identifying marks (such as watermarks and metadata) are automatically included and retained when users download or share content?

8. Implementing Invisible Markers

If it is difficult to include a visible indication in the design, have you implemented technical measures to record information regarding AI generation in the file’s metadata or similar locations?

9. High-Impact AI Diagnosis

Have we assessed whether our service falls under the category of “high-impact AI”—such as in recruitment, credit scoring, or healthcare—and, if necessary, sought advice from specialized organizations?

10. Supervision and Documentation System

If the AI is deemed to be high-impact, are you prepared to establish human oversight procedures and document both the risk management and oversight processes?

Frequently Asked Questions

Q1. When will the AI Framework Act take effect?

The AI Framework Act was promulgated in January 2025 and took full effect on January 22, 2026. However, AI systems designated as high-impact AI are granted a regulatory grace period of at least one year.

Q2. How can I tell if our company qualifies as an “AI provider”?

If your service uses AI technology to deliver value directly to end customers, it is likely to be classified as an “AI provider.” Even if you simply use an external AI API, you may still be held liable if you integrate it into your own service and provide it to customers. If you are unsure, you can seek advice from a specialized agency through the “Artificial Intelligence Framework Act Support Desk.”

Q3. Do I have to include a watermark on AI-generated content?

The law requires "measures to identify AI-generated content," which includes both visible indicators (such as watermarks) and invisible indicators (such as metadata). If incorporating a watermark is difficult for design reasons, you can also comply by recording the generation information in the file's metadata.
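If the generation information is recorded in file metadata, it is worth verifying that the marker is actually present, for instance as an internal QA step before content leaves the service. Below is a minimal stdlib Python sketch, assuming the marker is stored as a PNG `tEXt` chunk keyed `AI-Generated` (an illustrative convention, not one the Act prescribes):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def has_ai_marker(png: bytes, keyword: bytes = b"AI-Generated") -> bool:
    """Walk the PNG chunk list and report whether a matching tEXt marker exists."""
    if png[:8] != PNG_SIGNATURE:
        return False
    pos = 8
    while pos + 8 <= len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        # tEXt layout: keyword, NUL separator, text value
        if ctype == b"tEXt" and data.split(b"\x00", 1)[0] == keyword:
            return True
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return False
```

Other formats would need format-specific handling (e.g., EXIF for JPEG), so a production check would dispatch on file type rather than assume PNG.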

Q4. What is “high-impact AI,” and what additional obligations does it entail?

High-impact AI refers to AI systems that have the potential to directly affect human safety, fundamental rights, and significant social and economic interests. Typical examples include hiring screenings, credit assessments, and medical diagnoses. Once designated as high-impact AI, organizations are required to establish a risk management framework, implement human oversight procedures, and document all processes.

Q5. What penalties apply for violating the Framework Act on AI?

The current AI Framework Act emphasizes transparency and self-regulation over heavy-handed punishment, but it does provide administrative fines of up to 30 million won for violations such as failing to label AI-generated content, with enforcement deferred during the initial grace period. More importantly, such violations can lead to a loss of customer trust and damage to the brand’s image.

Ahn Eunjung
