Due to the sudden popularity of AI systems such as ChatGPT, many individuals view AI through the lens of user-generated inputs and commands. But businesses have employed AI applications for years, and we continually experience their effects in our everyday lives (e.g., through Google Cloud or Microsoft Azure). This has driven a dramatic improvement in the quality of our most important technologies.
However, with the recent explosion in the use of artificial intelligence, privacy experts have grown concerned that AI technologies will strain legal privacy frameworks written before the technology existed. That means that, where lawmakers have not explicitly accounted for artificial intelligence, there is the potential for gaps in privacy protection. In response, six new state AI-related laws have recently gone into effect that allow users to opt out of profiling by AI. The state of New York, for example, requires employers to notify job candidates whenever they use artificial intelligence in their hiring processes; it also mandates an annual audit to evaluate any AI used in hiring for bias. Additional privacy laws have been passed that affect how companies use AI but do not regulate the technology directly.
Members of Congress are also beginning to take action; Senator Chuck Schumer, for example, has proposed a bipartisan AI framework that seeks to balance innovation with individual privacy concerns. However, the problem with many of these approaches (New York's law being a notable exception) is that they respond to current practices rather than anticipating how AI may be used in the future. This has led many to wonder whether, as AI continues its rise, user data can be protected without a federal law unifying the progress individual states have already achieved.
Luckily, sweeping new legislation may not be the only way to govern AI. There is reason to believe that some existing data privacy laws may already be enough to cover users interacting with AI. This is especially true where the personal data in question is particularly sensitive (e.g., health records, financial information, the data of minors) or where the use of AI conflicts with preexisting legal structures, such as those governing hiring. In such cases, data privacy laws may already impose significant restraints. This matters because it means that, as we wait to see which way the national wind blows on AI legislation, there is a good chance consumers are already protected.