As artificial intelligence becomes increasingly prevalent, businesses need to prioritize compliance and risk management, rather than focusing solely on growth.
March 1, 2026 marks a significant milestone as the Artificial Intelligence (AI) Law officially comes into effect in Vietnam. The enactment of this law comes as countries around the world are racing to establish governance frameworks for a rapidly developing and widespread technology.
In Vietnam, the explosion of AI applications in finance, healthcare, and cybersecurity has created an urgent need for a legal framework that ensures safety while not stifling innovation.
From a business perspective, Mr. Truong Duc Luong, Chairman of VSEC, G-Group Technology Corporation, assessed that the Artificial Intelligence Law is not only a management tool but also an opportunity to shape the nation’s technological identity.
Mr. Luong emphasized the importance of building specialized data and a controlled testing mechanism for AI. These changes are expected to create a transparent business environment, helping domestic businesses clearly define their development roadmap.
One of the core tenets of the AI Law is its risk-based approach. Instead of applying a single set of rules to all applications, the law concentrates oversight on the areas with the greatest potential to affect human rights and national security.

Mr. Truong Duc Luong, Chairman of VSEC, G-Group Technology Corporation
Artificial intelligence and the compliance problem
From a business perspective, how do you assess the direction of the Artificial Intelligence Law, which officially came into effect on March 1, 2026, compared to regulations in other regions such as the EU or Japan?
Mr. Truong Duc Luong: Currently, not many countries have enacted regulations on artificial intelligence (AI). One of the main reasons is that AI is developing too rapidly, causing countries to be cautious to avoid creating negative impacts, or falling into a situation where regulations become outdated as soon as they are implemented.
Vietnam’s Artificial Intelligence Law was developed based on references from many countries and domestic realities. I see the law as taking a foundational approach rather than going into detailed guidelines.
This foundation includes key components such as building a database of local characteristics, classifying risk levels, establishing a development support fund, a testing mechanism for new applications, and human resource training.
In particular, the regulations on support funds and pilot mechanisms are distinctive and highly relevant given our context of lagging behind other countries. However, many steps still need to be taken to concretize these regulations, and we need to wait and see the effectiveness of their practical implementation.
The new law divides AI into three risk levels: High, Medium, and Low. Will defining a “high risk” threshold based on the number of users, for example, over 500 or 1,000, make it difficult for technology companies to determine compliance levels in the initial stages?
Mr. Truong Duc Luong: In my opinion, technology companies will not have difficulty controlling the number of people who access and use their applications. However, enforcement will require close cooperation between businesses and regulatory agencies.
I propose that the monitoring process needs to be technologized to ensure the effectiveness and efficiency of the law. Another important point is the need to clearly distinguish between organizations that own AI technology and organizations that use or are affiliated with it. We need appropriate management mechanisms for each of these entities to avoid creating difficulties or hindering the operations of businesses and organizations.
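To make the tiering discussion above concrete, here is a minimal sketch of how a provider might map user counts to the law's three risk levels. The 500 and 1,000 thresholds are the illustrative figures raised in the question, not values taken from the law itself, and a real classification would weigh far more than user counts.

```python
# Hypothetical sketch: mapping active-user counts to the three risk
# tiers discussed above. The 500/1,000 thresholds are illustrative
# examples from the interview question, not figures from the law.

def classify_risk(active_users: int) -> str:
    """Return a risk tier ("low", "medium", or "high") for an AI
    application based solely on its number of active users."""
    if active_users >= 1_000:
        return "high"
    if active_users >= 500:
        return "medium"
    return "low"

print(classify_risk(120))    # small pilot
print(classify_risk(750))    # growing user base
print(classify_risk(5_000))  # wide deployment
```

A scheme like this is trivial for a company to compute internally, which supports Mr. Luong's point that counting users is not the hard part; the open question is how regulators and businesses share and verify those numbers.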
As a cybersecurity company, under the new law, what risk level are VSEC’s products currently classified at, and what compliance preparations are being implemented?
Mr. Truong Duc Luong: The defining characteristic of the cybersecurity monitoring solutions that VSEC applies to its clients is the identification and monitoring of technical aspects.
We focus on cloud applications and infrastructure that do not utilize human data, so these products are classified as low-risk. However, our engineering team still communicates directly with customers regularly to identify any exceptional risks that may arise.
At the same time, VSEC’s AI is also equipped with features that allow administrators to monitor the system itself. This is a convenient approach that I believe other businesses can adopt immediately.
The law also requires that AI always be subject to human supervision and intervention. How did VSEC address this challenge, ensuring that its security system both automatically and quickly prevents cyberattacks while maintaining human control?
Mr. Truong Duc Luong: In reality, within VSEC’s business, AI plays the role of an assistant, providing information to experts for further processing.
The system is not yet capable of making completely independent decisions, such as automatically blocking a cyberattack without human confirmation. This stems from our caution when providing services in a real-world environment. At the same time, it also comes from an objective assessment of the maturity level of AI technology in each specific cybersecurity task.
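The supervision pattern described here, in which the AI only recommends an action and a human must confirm before anything is enforced, can be sketched as follows. This is a hypothetical illustration, not VSEC's actual system; the `Alert` fields, confidence threshold, and callback are all invented for the example.

```python
# Hypothetical sketch of the human-in-the-loop pattern described above:
# the AI component recommends an action, but a blocking step requires
# an analyst's confirmation before the action is enforced.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    source_ip: str
    confidence: float  # model's confidence that this is an attack

def ai_recommendation(alert: Alert) -> str:
    """The assistant proposes an action but never enforces it."""
    return "block" if alert.confidence >= 0.9 else "monitor"

def handle_alert(alert: Alert, analyst_approves: Callable[[Alert], bool]) -> str:
    """Enforce a "block" only after a human confirms it; otherwise
    fall back to passive monitoring."""
    action = ai_recommendation(alert)
    if action == "block" and not analyst_approves(alert):
        return "monitor"  # human overrides the automatic block
    return action

# A callback stands in for the manual review step.
decision = handle_alert(Alert("203.0.113.7", 0.95), analyst_approves=lambda a: True)
print(decision)
```

The key design point is that no code path reaches "block" without passing through the human callback, which is one way to satisfy a requirement that AI remain under human supervision and intervention.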

The Chairman of VSEC believes that the Artificial Intelligence Law is an opportunity for domestic businesses to achieve technological self-reliance.
Towards technological self-reliance for businesses
Some argue that complying with new requirements such as AI labeling, risk assessment, and activity logging will inevitably increase operating costs. In your opinion, how will this cost burden impact AI startups and their competitiveness in the market in the future?
Mr. Truong Duc Luong: Every time a new law is enacted, businesses need to comply, and this often increases operating costs.
However, with the Artificial Intelligence Law, I see this as an opportunity for businesses to become technologically independent. In fact, the regulatory body has opened up growth opportunities for innovative businesses by establishing a testing mechanism (sandbox) for AI applications.
I believe that, with this mechanism, businesses will face less stringent compliance requirements in the initial stages, allowing them to focus their resources on developing and refining new ideas.
Some businesses are concerned that the licensing and approval process for “high-risk AI” will slow down time to market. What suggestions do you have to prevent technology companies from losing their competitive edge in terms of business speed?
Mr. Truong Duc Luong: Based on the contents of the Law, I propose that there should be an automatic licensing mechanism for applications and technological solutions that have matured from testing groups or safety testing programs.
This helps balance two factors: on the one hand, the application has been proven effective and impactful on users over time; on the other hand, when officially entering the growth phase, the business will not be hampered by too many inspection procedures that hinder business speed.
Essentially, AI requires a massive amount of data for training, but the law also mandates strict protection of users’ data rights. How can tech companies obtain sufficient quality data to develop AI without violating data privacy regulations?
Mr. Truong Duc Luong: I believe that the perspective on the need for massive data also needs to be reconsidered. Currently, many AI applications can reuse existing AI technologies, so the need for new data may not be as great as before.
However, the requirement to protect users’ data rights must still be strictly adhered to. In my opinion, this should be approached through technical solutions.
Specifically, there is a need for technologies that automatically detect and remove invalid or illegal user data. In addition to the Artificial Intelligence Law, businesses and organizations also need to clearly disclose and collect only sufficient information in accordance with the requirements of the Personal Data Protection Law.
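The data-minimization control suggested here, automatically stripping disallowed personal fields before records are used, can be sketched in a few lines. The field names below are illustrative, not drawn from any specific law or product; a production filter would also handle nested data and free-text fields.

```python
# Hypothetical sketch of a technical control that removes personal
# fields from records before they are used for AI training, keeping
# only an explicit allowlist. Field names are illustrative.

ALLOWED_FIELDS = {"event_type", "timestamp", "service"}

def sanitize(record: dict) -> dict:
    """Keep only fields explicitly allowed for training; drop the rest."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "event_type": "login_failure",
    "timestamp": "2026-03-01T08:00:00Z",
    "service": "vpn",
    "full_name": "Nguyen Van A",   # personal data: dropped
    "national_id": "0123456789",   # personal data: dropped
}
print(sanitize(raw))
```

An allowlist is deliberately conservative: anything not explicitly approved is discarded, which aligns with the principle of collecting only the information the Personal Data Protection Law permits.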
Finally, now that the legal framework is clear, how do you predict the landscape of AI application in Vietnam will change in the next 3-5 years? And how does VSEC position itself in this wave of “AI transformation”?
Mr. Truong Duc Luong: VSEC was once a pioneer in professional cybersecurity training and mastering specialized technologies. Going forward, we will continue to be pioneers in applying AI to provide the most effective vulnerability detection and cyberattack monitoring services.
This is within the context of the significant transformation of AI in Vietnam in the coming years. The main trend will be to create real value and tangible benefits for businesses and individual users in many essential sectors such as finance, education, and private healthcare.
