Blog Post #2 March 24, 2023

Written by Gerardo Valentino Gorospe IV



Today, technological advances have led to increasing levels of automation by machines acting on our behalf. Humanity’s reliance on robotics has put AI at the forefront of modern military development, and the development of a legal framework surrounding the use of Autonomous Weapons Systems (“AWS”) is one of the greatest questions facing future generations. The reality of AWS is a foreseeable and unavoidable threat to humanity, and proper discourse about their development and application is paramount for International Humanitarian Law (“IHL”) scholars and decision-makers of the coming generations. “AWS” refers to a robotic weapon “that can select and engage targets without further intervention by a human operator.” The line between semi-autonomous weapons and AWS is often blurred. Semi-autonomous weapons supplement a human operator’s decision and have been in use for the past two decades. A true AWS – one with the capability to make its own decision to kill a target without any human intervention – has yet to be seen.

Why AWS? – The Inevitability of Killer Robots

Although we still have no indication of when a “true” AWS will be created, its creation is all but inevitable. First, an AWS is desirable to the militaries of developed countries. Second, developments in the AI technology necessary for AWS are valuable to humanity at large: AI development is being driven by the civilian market for countless applications, such as self-driving cars and facial recognition. Given the commercial value of such technology, it is impossible to contain the development of AWS technology within the military realm. By advocating for an outright ban on AWS, critics forgo the establishment of a proper legal framework for AWS to operate within. This leaves the legal landscape ambiguous and risks tragedies and violations of international law. To avoid this, it behooves contemporary scholars to set legal parameters on AWS before they arrive on a battlefield.

The “Pros and Cons” of AWS

Most military experts contend that “truly outlawing autonomous weapons is not a likely outcome.” Further, “there are currently no [binding] international treaties that prohibit or regulate the development and deployment of [AWS]...” AWS would greatly augment the capabilities of a nation’s armed forces while eliminating many of the limitations of human soldiers. AWS can detect and process complex information far faster than any human, making them tactically flexible and effective. They would greatly reduce the human cost of warfare, and such weapons are not driven by human emotion. However, by reducing the human cost of warfare, the availability of AWS may lower the threshold for initiating armed conflicts and lead to undesired chain reactions of hostilities. Further, the mere existence of such powerful military assets would likely trigger a global arms race, and the desire to remain ahead in AWS development may lead to the deployment of unsafe AI systems. Lastly, the lack of human conscience touted by supporters of AWS is a double-edged sword. The decision-making of an AWS is run by algorithms, and thus no ethical decision is ever truly made. Much of the current IHL framework is written within the wider context of the complex human understanding of ethics. By removing the human aspect of warfare, conflicts may easily devolve into the senseless violence that IHL sets out to prevent.

The Future of “Autonomous Weapons Systems” – AWS and IHL

Additional Protocol I, Article 36 to the Geneva Conventions places an obligation on state parties, “in the study, development, acquisition, or adoption of a new weapon. . . to determine whether its employment would, in some or all circumstances, be prohibited by the protocol or any other rule of international law.” By their complex nature, AWS will struggle to comply with core IHL principles, such as the principles of distinction and proportionality.

  1. The Principle of Distinction

Under Additional Protocol I, Article 48, the principle of distinction is founded on the idea that civilians must be protected and distinguished from combatants. The improbability that AWS can comply with the principle of distinction is their greatest weakness. AWS’s issues with distinction can be categorized in three ways: the “weak machine perception” problem, the “frame” problem, and the “weak software” problem.

The “Weak Machine Perception” Problem

Distinction requires human combatants to collect information and make decisions about a potential target. Whether a human target is viable is judged on factors such as the immediate threat posed, the context of identification, and a healthy dose of intuition. The argument that an AWS can be programmed to replicate the highly complex decision-making required by the principle of distinction is dubious. Further consideration of this problem raises the question: aren’t humans just as imperfect as machines? Human emotion can be both lifesaving and destructive. But the principle of distinction calls upon combatants to rely upon human instinct, and AWS can never truly match human intuition. In combat, humans are allowed lapses in judgment, whereas AWS theoretically should not be. Ultimately, human error is ethically preferable to machine error.

The “Frame” Problem 

The “frame” problem refers to the challenge of limiting the scope of reasoning required to derive the consequences of a particular action. It means that an AI complex enough to adhere to the principle of distinction would be too slow to be effective in a combat scenario. Conversely, pre-programming which variables are “relevant” and “irrelevant” would make an AWS militarily effective, but greatly increases the risk of incorrect and indiscriminate attacks.

This problem is heightened by Additional Protocol I, Article 50(1), which requires combatants to treat persons as civilians in instances where there is “doubt” as to their civilian status. AWS could theoretically compute doubt by calculating the likelihood that a person is a lawful target, but this highlights the severity of the frame problem. An ethical AWS would be expected to collect all the information relevant to IHL principles from a complex environment, and would have to be powerful enough to properly assess every possible scenario and outcome, which would, theoretically, take an infinite amount of time. That said, the frame problem exists within our current understanding of AI capabilities; future advances in AI may yet overcome it.
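To make the “computed doubt” idea concrete, here is a purely hypothetical sketch of how Article 50(1)’s presumption of civilian status might be encoded as a confidence threshold. The function name, the input probability, and the 0.95 cutoff are all illustrative assumptions for this post, not a real weapons system or a legally endorsed standard:

```python
# Hypothetical illustration only: Article 50(1)'s presumption of civilian
# status expressed as a confidence threshold. The threshold value is an
# invented assumption, not a recognized legal standard.

def presume_civilian(p_lawful_target: float, threshold: float = 0.95) -> bool:
    """Return True (treat the person as a civilian) whenever confidence
    that the person is a lawful target falls below the threshold --
    i.e., any residual 'doubt' defaults to protection."""
    return p_lawful_target < threshold

# Even a 90%-confident classification still resolves to "civilian":
print(presume_civilian(0.90))  # True: doubt defers to civilian status
print(presume_civilian(0.99))  # False: confidence exceeds the threshold
```

The sketch also shows why the frame problem bites: the entire legal difficulty is hidden inside producing a trustworthy `p_lawful_target` from a complex battlefield environment, which is exactly the computation the text argues may be intractable.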

The “Weak Software” Problem

The “weak software” problem involves issues of unpredictability. If an AWS is deployed in the field, how can a human overseer determine whether the AWS’s decision was based on a software error or on a high-level solution visible only to the AI? The most obvious solution is to design AWS with post-operation data review in mind. However, when interpreting that data for software errors, human comprehension may fail to grasp the reasoning of a powerful AI. If an AWS is too complex, there is no reliable way to determine whether mistakes were caused by the human or the machine.

  2. The Principle of Proportionality

Under Additional Protocol I, Article 51, the principle of proportionality prohibits an attack if the incidental harm to civilians is excessive in relation to the concrete and direct military advantage anticipated by the attack. The International Criminal Tribunal for the former Yugoslavia notes that criminal accountability for disproportionate attacks uses an inherently human “reasonable person” standard: “whether a reasonably well-informed person in the circumstances. . . making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack.” Assessment of proportionality centers on a subjective “apples-to-oranges” comparison between civilian lives and military necessity. The “reasonableness” requirement of proportionality therefore poses a large problem for AWS’s adherence to IHL. A proper replication of the “reasonableness” standard would require an impossibly advanced AI: a proportionality assessment would require an AWS to analyze the immediate and possible outcomes of all possible scenarios and to balance all possible civilian casualties against all possible short-term and long-term military advantages. The balancing of complex interests required by proportionality analysis is too advanced for the weak machine perceptions of an AWS.

  3. Accountability for IHL Violations by AWS

The third major issue for establishing a legal framework for AWS is accountability. Some argue that the “boots-on-the-ground” operators of AWS should shoulder liability for IHL violations. But unless an individual knowingly or recklessly deploys an AWS in an unethical way, it is safe to assume that most AWS operators will lack the understanding of the system necessary for culpability to attach. Another suggestion is to shift liability onto AWS developers under a product-liability regime. By shifting accountability to engineers, supporters of this policy hope to preempt IHL violations by ensuring AWS are designed to the highest ethical standards. However, a product liability framework would mean that “even those suspected of the most heinous war crimes would not face criminal prosecution. . . but would be confined to a lawsuit and potential monetary fine.”

Thus, the most logical answer is to place the lion’s share of liability on military commanders. The doctrine of command responsibility holds commanders responsible for war crimes carried out under their command when they knew or reasonably should have known that their forces were committing or about to commit such crimes. Placing criminal liability on commanders simplifies questions of responsibility for violations of international law by AWS and reduces the likelihood that unsafe AWS will be deployed. However, command responsibility is a doctrine of crime prevention, and one could argue that an AWS, lacking the requisite mens rea to ever be criminally liable, cannot commit a crime that can truly be prevented. AWS are also designed to operate independently, meaning a commander may not always have the knowledge or reason to anticipate a specific criminal action.

Possible Regulations of Autonomous Weapons Systems

  1. Meaningful Human Control

Meaningful human control refers to human supervision of an AWS and the option to deactivate it. Putting accountability, and thus moral agency, back into human hands has seemingly assuaged the most ardent opponents of AWS. However, for some, meaningful human control might mean setting minimal standards on AWS programming; others may seek to impose extremely high standards in order to limit overall usage. Setting the standards too high defeats the purpose of AWS. Further, meaningful human control requires human operators to understand and act on the decision-making processes of the machine. But if an AWS is too complex for human comprehension, how can we argue that meaningful control is maintained? Lastly, where there is human control, there is not full autonomy: the requirement of meaningful human control demands that AWS be utilized only as semi-autonomous weapons.

  2. International Openness and Transparency

Transparency about the status of AWS development across the globe would lead to higher standards in programming and safety. States can begin by settling upon an agreed-upon definition of AWS, and could then share the ethical standards and safety measures being programmed into their AWS.

  3. Limited Application

An effective way to remove ethical barriers to AWS deployment is to avoid deploying them in high-risk environments and to limit their application to theaters where the risk to non-military targets is minimal. This would sidestep the weak machine perception, frame, and weak software problems, allowing programmers to develop efficient AWS while minimizing the need for highly complex algorithms capable of complying with the principles of distinction and proportionality. However, it is unknown whether global militaries would agree to so limit the application of AWS.

  4. Machine Learning and AI Ethics

One proposition for resolving the proportionality dilemma is to lean further into AI algorithms and data analytics. It is theoretically possible for a machine to learn “proportionality” by analyzing large data sets of acceptable and unacceptable human military decisions. But this argument overlooks the limitations of machine learning, as studies have repeatedly demonstrated that the data-driven nature of these technologies leaves them vulnerable to erroneous results and bias.
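A toy sketch can illustrate why learned “proportionality” inherits the biases of its labels. Everything here is invented for illustration: the harm-to-advantage ratios, the two assessors, and the simplistic learning rule are assumptions, not a real training pipeline or any state’s actual doctrine:

```python
# Illustrative sketch only: a trivial "learner" that infers a
# proportionality threshold from labeled examples. It shows one point
# from the text: the same incidents, labeled by different (hypothetical)
# assessors, yield different learned rules. All data is invented.

def learn_threshold(examples):
    """Return the largest harm-to-advantage ratio labeled 'proportional'.
    examples: list of (ratio, label) pairs, label True = proportional."""
    proportional = [ratio for ratio, ok in examples if ok]
    return max(proportional) if proportional else 0.0

# The same five incidents, labeled by two hypothetical rival assessors:
incidents = [0.2, 0.5, 1.0, 2.0, 4.0]
labels_strict = [(r, r <= 1.0) for r in incidents]      # strict assessor
labels_permissive = [(r, r <= 2.0) for r in incidents]  # permissive assessor

print(learn_threshold(labels_strict))      # 1.0
print(learn_threshold(labels_permissive))  # 2.0
```

The learned rule simply reproduces whichever assessor supplied the labels, which is the core of the bias objection: without agreement on the training labels, “machine-learned proportionality” is just one state’s judgment encoded in software.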

The problem of machine bias is magnified by the complex requirements of ethical warfare and compounded by conflicting state interests. The same military decision deemed “proportional” by Russia’s military could be considered “disproportionate” by Ukrainian armed forces, and vice versa. Transparency and data sharing between AWS programmers, regardless of national or military allegiance, would be the only way to avoid bias in input and output. But states are often reluctant to share military technologies with rival powers, and are likely to disagree on which calculations matter most for determining machine learning outputs. A possible rebuttal to these objections would be to limit the programming of AI ethics exclusively to internationally recognized rules of engagement and IHL principles.

  5. The “Wait-and-See” Approach

One final suggestion is a more passive “wait-and-see” approach to the regulation of AWS, under which ethical norms and social uses would evolve gradually alongside their introduction. This suggestion comes dangerously close to complete inaction but is not wholly without merit. The “wait-and-see” approach does not disregard regulation; it simply calls for a more laissez-faire approach to ethical discussions of AWS. I include this solution last because it accepts the reality of our situation: we do not know what the future has in store.

About the author:

Gerardo Valentino Gorospe IV is a 3L at UCLA School of Law specializing in International and Comparative Law.

Blog Post #1 Nov. 26, 2022

Written by Abhishek Ranjan & Kanishka Pamecha


The long-standing practice of “virginity testing” as part of the recruitment process for female cadets in Indonesia’s armed forces has finally come to an end. Thousands of female applicants had been subjected to virginity tests since 1965, despite the National Police principles that recruitment must be “non-discriminatory” and “humane.” The tests were done to determine whether applicants were sexually active, and hence whether they were “moral” and “worthy of the office.” Human rights organizations have long condemned the process as a continuation of the invasive and discredited so-called virginity test, which is progressively being phased out in many areas.


A virginity test is a vaginal or hymen examination meant to determine whether a woman has engaged in sexual intercourse. It is also known as the “two-finger test,” in which the doctor inserts two fingers inside a woman’s vagina to check the state and laxity of the hymen. However, the state of the hymen hardly answers that question: the size of a hymen may vary for many reasons unrelated to sex.


The state of the hymen was once regarded as a sign of sexual activity, but health organizations have disproved this view, advocating that the practice has no scientific basis and violates human rights. There is no recognized test that can establish a person’s history of vaginal intercourse. The World Health Organization likewise recognizes it as an unscientific practice that is “degrading, discriminatory, and traumatic.” The only proven consequence of the test is its negative impact on the physical and psychological well-being of the women subjected to it.

This primitive and humiliating practice does not measure the physical and mental health of female candidates or of soldiers’ prospective fiancées. Women who intend to join the army should be evaluated solely on their ability to complete basic military training, just like their male colleagues; the recruitment process for male and female candidates must be equal. And women intending to marry military soldiers should not be forced to undergo a humiliating procedure that has no scientific or medical merit.


Forced virginity tests are unpleasant, painful, and inhumane, intended to degrade and marginalize women. The test is an affront to a woman’s dignity and reputation and a violation of her right to bodily privacy. The World Health Organization recommended abolishing such testing in 2018, calling the test a “violation of the human rights of girls and women.” Since the test constitutes an attack on women’s “honor and reputation,” it breaches Article 12 of the Universal Declaration of Human Rights (UDHR). Similarly, Article 17 of the International Covenant on Civil and Political Rights (ICCPR) recognizes the individual’s right to privacy in the same way the UDHR does. That women must guard their reputations in this way itself reveals much about the societal attitudes and constraints placed upon them. The test also breaches the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, which Indonesia has ratified, and violates Article 5 of the UDHR and Article 7 of the ICCPR, which provide that no one shall be subjected to torture or to cruel, inhumane, or degrading treatment or punishment.

Article 2 of the UDHR grants freedom from “discrimination of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.” In addition, the United Nations General Assembly adopted the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) in 1979, and discrimination against women is prohibited by CEDAW and other human rights treaties. Since men are not subjected to virginity testing, the practice is discriminatory against women: it has the effect or purpose of denying women the opportunity to work as police officers on an equal footing with men. Indonesia is a party to the Convention and is bound by international law to refrain from discriminating against women. The tests are, moreover, a form of political and cultural discrimination against women. In sum, the military’s virginity tests on women’s bodies are flagrant human rights abuses, and the test’s cruel and degrading nature breaks both domestic and international rules.

These tests lasted for decades, revealing the Indonesian armed forces’ tone-deaf hiring policy and deep-rooted ignorance of human rights, dignity, social justice, and non-discrimination principles. Worse, none of the military officers involved have ever faced administrative or legal consequences for their actions. Organizational and legal accountability are the most apparent solutions, as respect for women’s dignity is one of the values of the TNI code of ethics. Under Article 2(d) of TNI Law No. 34/2004, military soldiers accept political policies that comply with the ideals of democracy, civil supremacy, human rights, and the provisions of national and international laws that Indonesia has ratified. This compliance leads to the following paradox: although Indonesia has ratified various international conventions and treaties that promote equal rights for women and respect for their dignity, such as CEDAW, the TNI continued the practice for so long.


Women comprise only 10% of the country’s 450,000 military troops, and the abolition of this practice could mean more opportunities for women to join the military. Virginity testing is a global issue that raises concerns about female bodily autonomy, sexual health, and women’s rights. For centuries, societies have looked for a physical sign to determine virginity; this is now acknowledged to be scientifically impossible. Although most countries have abolished the “archaic” and “unscientific” practice, virginity tests are still performed in at least 20 countries, and illicit virginity testing has lately been discovered in prisons and detention centers worldwide, most notably in Egypt, India, Iran, and Afghanistan. Eliminating the dangerous practice will require a concerted effort from all sectors of society, particularly the public health community, health systems, and health professionals. Health practitioners and their professional bodies should be aware that virginity testing has no scientific basis and cannot establish past vaginal penetration; they should also be mindful of its health and human rights implications and should never perform or encourage it. Communities and other relevant stakeholders should conduct public awareness efforts to debunk virginity myths.

The elimination of virginity tests from the recruitment process for female cadets in Indonesia’s armed forces is undoubtedly a step in the right direction, but it only scratches the surface. There is a lot more still to be done. Only by combating harmful views about female “purity,” a concept that overlaps significant social, cultural, and religious beliefs, can long-term reform be achieved.

About the Authors: 

Abhishek Ranjan is currently a fourth-year law student pursuing B.A. LL.B. (Hons.) at Dr. RML National Law University, Lucknow. His academic interests particularly lie in Human Rights Law, Public Policy, Diplomacy, and International Law.

Kanishka Pamecha is currently a fourth-year law student pursuing B.A. LL.B. (Hons.) at Dr. RML National Law University, Lucknow. Her academic interests particularly lie in Women’s Rights, Labour Law, and Environmental Law.
