So Sue Me

Patents are a legal protection of intellectual property. It’s basically like calling “dibs” on an original product idea. In theory, I think patents are a great idea; the inventor of a new product should be the one to reap the rewards resulting from that product. However, in practice, I think patent law is a flawed system.

There definitely are many pros to the patent system. It can accelerate innovation, as it gives inventors, engineers, and scientists an incentive to do cutting-edge work in order to be the first to make a patentable breakthrough. Owning a patent for your work is a pretty good motivation.

However, there are also many cons. The first to invent a product may not be the one to get the patent; someone richer, with more resources and a better knowledge of the patent system, may complete the patent process first. In addition, as with any legal system, there are plenty of ways to exploit it, as evidenced by patent trolls.

Patent trolls are companies that do not produce a product; they merely collect patents and sue other companies for patent infringement. Reading about the various cases won by patent trolls against tech companies made me angry. It would be one matter if these trolls were actually producing these patented products, but they’re merely holding on to these technologically innovative ideas and letting them rot away, attacking any other company that tries to bring these ideas to fruition. VirnetX may technically hold the patent for the technology behind iMessage, but if they never intend to utilize that technology, they have no right to block others from developing the same technology. That’s absolutely ridiculous. To me, VirnetX and other patent trolls are earning revenue by holding technology hostage.

A huge part of this problem is the fact that patents have expanded into the realm of software, allowing patents to cover more abstract things. I don’t think it’s necessarily a bad idea for patents to cover software, as software is intellectual property too, but perhaps patents need to be restricted to software products, not merely ideas or processes. A patent on a computerized workflow to distribute work order assignments is so general and vague that it seems completely undeserving of a patent. Yet a patent troll was still able to sue a flower delivery service over it. I think if a patent is held for a certain, reasonable amount of time without any effort to produce a marketable product, then the patent should expire. This would allow innovators with a genuine desire to advance technology to protect their work, while preventing patent trolls from holding back the progress of technology.

All Hail Our Robot Overlords

Automation is not inherently unethical. In fact, I would go so far as to say that automation–and the advancement of technology in general–is a good thing, and helps drive the progress of society and humanity. However, as with all major technological and economic change, the implications and negative effects must also be considered and accounted for.

Automation has many benefits. With increased automation, companies can create output more efficiently, benefiting the global economy. Increased automation can also create new markets that in turn create new jobs.[1] Automation can make the world safer, too. For example, one of the arguments in favor of automation like self-driving cars is that with the use of AI, cars can sync up with each other to eliminate traffic and make smarter driving decisions that will lead to fewer accidents on the roads. However, with all these benefits, there are of course downsides as well.

The primary concern for many people is the loss of jobs that increased automation will cause.[2] This is definitely a valid concern; already, self-checkout kiosks are becoming more common in stores, robotic voices are replacing human customer service agents, and self-driving cars are hitting the roads. While some see this as a benefit, claiming that people will have more time to pursue more meaningful activities if menial jobs are replaced by automation, I think that is a naive view of the matter. Even if an idea such as a Universal Basic Income were implemented to make up for income lost from these menial jobs, I don’t think that alone is a viable solution.

While the primary purpose of work for an individual is to earn an income, work also helps give meaning and purpose to someone’s life. “The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.”[3] I know that I personally feel restless and stressed when I don’t have anything to do–while my opinion might be skewed since I love what I study and am excited about the work I will be doing in my future career, I feel like the general sentiment of feeling productive through work holds true for most people. Therefore, even with something like a Universal Basic Income in place, steps still need to be taken to create new jobs to replace those lost through automation.

Another concern people have about increased automation is the amount of trust required to integrate automation into our everyday lives. Even if diagnostic algorithms or self-driving cars are proven to be better at decision-making than humans, most people would rather trust humans than emotionless machines. While very capable of making mistakes, humans are more transparent. If we make mistakes, we can usually understand why and how a mistake happened. On the other hand, AI algorithms such as neural networks may be near-perfect, but the inner workings of a neural network are like a black box. No matter how well the neural network performs, it’s still harder to trust something when you can’t understand its reasoning. Logically, I’m comfortable with AI making decisions for me, as I realize that by the time it reaches public use, it will be performing at a much better level than humans. However, I still have a less-rational discomfort lingering over the idea.
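To make the black-box point concrete, here is a toy sketch of my own (in Python, with made-up weights; purely illustrative, not from any of the readings). Even in a tiny network, the entire “reasoning” behind a decision is just matrices of numbers:

```python
import numpy as np

# A tiny two-layer network with arbitrary (untrained) weights.
# Real trained networks are no more interpretable than this:
# the decision process is still just these matrices.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input-to-hidden weights
W2 = rng.normal(size=(8, 1))   # hidden-to-output weights

def decide(x):
    hidden = np.maximum(0.0, x @ W1)   # ReLU activation
    return (hidden @ W2).item()        # a single score, with no rationale

x = np.array([0.2, -1.3, 0.7, 0.0])
print(decide(x))
# We get a decision, but nothing in W1 or W2 tells a human *why*.
```

A human expert can walk you through their reasoning; the best you can do here is stare at the numbers.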

Ultimately, I do feel that increased automation is a good thing, albeit one that comes with some negative side effects. It is not unethical to pursue a future with increased automation, but pursuing it carries the responsibility of accounting for, and coming up with solutions to, its downsides.

I Have No Mouth, and I Must Scream

“Artificial intelligence” is mostly a buzzword, a term that elicits much hype in popular culture but, in its current form, is not quite as mysterious and mind-blowing as people imagine. Artificial intelligence is the field of computer science that aims to build computers that can do things that humans can do, at the same level or better. When most people think of artificial intelligence, they think of humanoid robots that can pass as humans, or the malicious human-enslaving machines from The Matrix–they think of machines that can completely mimic or surpass the human mind in every aspect. This kind of artificial intelligence is called “artificial general intelligence,” or AGI. While super cool, this field still has a ways to go before we have to worry about robots taking over the world. Much of AI instead focuses on narrow artificial intelligence: creating computers that can imitate human behavior on certain tasks. AlphaGo, a program built for the purpose of playing the game of Go, is one such example. The field can also be split between strong AI and weak AI. Strong AI strives to actually imitate how a human’s mind works, so that the machine can both think and explain how humans think. Such a model has not yet been built. Weak AI, on the other hand, merely strives to imitate human behavior. The underlying mechanics of how the system works may be entirely different from how the human brain works; only the end result matters.

I do think that AlphaGo, Deep Blue, and Watson are proof of the viability of AI. While they’re only good at one specific thing, they’re really good at that one specific thing. As long as it’s understood that AI isn’t all about making robots that can pass as humans, then it’s clear that the progress that AI has already made is fascinating in its own right. Programs like AlphaGo have gone beyond just encoding a large knowledge base combined with search algorithms–AlphaGo utilizes a neural network to pretty much replicate human intuition, and I think that’s pretty freaking awesome.

In the Artificial Intelligence class I am currently taking, we’ve discussed the Turing Test and the Chinese Room counterargument. If we are measuring a machine’s intelligence based on how well it can think, then I do not believe the Turing Test measures that at all. The Chinese Room counterargument against the Turing Test claims that even though a machine can produce output that passes the Turing Test, the machine itself is still not thinking. After we discussed this topic in my AI class, I was firmly on the side of Searle and his Chinese Room argument. However, after reading the articles for this class, I also see why the question of whether a machine is actually thinking is “too meaningless to deserve discussion.” If a machine can mimic human behavior so well that it can fool a human into believing it is also human, then does it really matter whether the machine is truly capable of thought? In the end, I suppose that whether the Turing Test is a valid measure of intelligence depends on your definition of intelligence.

I do think concerns over the growth of AI are warranted, as are concerns over any new technology. I, personally, am all for the development of technology related to AI as I think it has incredible implications and benefits for the world. However, whenever new technology is created, less desirable effects must also be considered. The most pressing concern seems to me to be the loss of jobs for humans, as AI machines become more and more capable of doing work. Why would companies pay for human labor if they could have machines do it for a fraction of the cost? Another issue that I can see arising as the result of AI is the question of who assumes responsibility if things go wrong. If a self-driving car crashes and injures humans, is the owner or the manufacturer at fault? Or can anyone even be held liable? These are things that need to be discussed and resolved before AI technology can be widely made available to the public.

There are other deeper ethical questions that are more philosophical in nature as well. Is it possible for an AI system to ever completely mimic the human brain? If so, then is the human brain just a computer? If machines can ever develop the ability to think at the same level as humans, is there anything that fundamentally separates them from us? As a Catholic computer science major, I find the theological implications of this question incredibly fascinating. On one hand, my religion teaches that humans do have a fundamental difference from these hypothetical AI systems–we have a soul, while the machines do not. On the other hand, if an AI machine were developed that perfectly encoded every single neuron, synapse, and electric signal of a human brain, how could there be any fundamental difference? I don’t know the answer–how could anyone?–but these are the kinds of questions that fascinate me regarding artificial intelligence.

Privacy Paradox

Listen to our podcast here!

After having gone through the challenges, have you decided to make any changes in your technology habits? If so, what are they?

I have not, and I do not really plan on making any changes to my technology habits. As a personal preference, I am not really concerned about my privacy. According to the Privacy Personality Quiz, I am a realist about my online privacy. I see my loss of privacy as a necessary consequence of the convenience of improved technology, and while I do take steps to keep some of my information private, it’s not one of my priorities. I think concerns about privacy are definitely valid; they just aren’t priorities for me.

In choosing between your personal privacy and technological convenience, which side do you choose? Is this an easy choice or a tough decision (or does it not really matter to you)?

This is an easy decision for me–I love the conveniences that technology provides and am not very concerned about my personal privacy. I found the University of Cambridge’s tool, Apply Magic Sauce, which guesses your personality traits based on your Facebook likes, to be interesting and entertaining rather than alarming. Of course, I’m not willing to give up all control of my privacy in exchange for greater convenience, but I am not concerned with the privacy practices currently in place.
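From what I understand, tools like this are generally built by training a statistical model on many users’ likes and self-reported traits; here is a toy sketch of that general approach (my own guess at the family of techniques, not Apply Magic Sauce’s actual model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: rows are users, columns indicate whether each user
# liked hypothetical pages A, B, C, D.
likes = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
])
extraverted = np.array([1, 1, 0, 0])  # made-up labels for one trait

# Fit a simple model, then guess the trait for a new user's likes.
model = LogisticRegression().fit(likes, extraverted)
new_user = np.array([[1, 0, 1, 1]])
print(model.predict_proba(new_user))  # [P(not extraverted), P(extraverted)]
```

With millions of real users instead of four toy ones, guesses like these can get eerily accurate, which is exactly why some people find tools like this alarming rather than entertaining.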

In addition, many problems can be solved through the analysis of large amounts of data. For this to occur, people may have to give up a small amount of privacy. This isn’t a matter of convenience; actual important problems can be solved by this. While we may someday reach a point where reducing personal privacy harms more than helps, I don’t believe we have yet reached that point.

Regardless of what you believe about your personal privacy, what do you think about privacy in general? Is privacy something worth fighting for or protecting? Or is it a relic of a bygone era?

I do think privacy is something worth fighting for. As I mentioned above, while I don’t believe privacy is currently a significant issue, I do believe it could become one in the near future, and that’s why it’s important for people now to vocalize what is and is not okay with regard to personal privacy.

One perspective from the Privacy Paradox podcasts that I found interesting is that we as individuals have an obligation to protect our own privacy, so that it does not look odd for more vulnerable members of society to also protect theirs. While I don’t really think twice about my own privacy, I do feel responsible if my privacy actions affect others in society. This is the reason I am rethinking some of my ideas about personal privacy.

[C3n50r3d]

I have mixed feelings about online censorship. In general, I am a huge proponent of intellectual diversity and of being educated on opposing viewpoints. I believe it is impossible to hold a well-informed opinion if you are not well-informed about the opposing viewpoint; you cannot learn to think for yourself and form your own opinions if you are never exposed to different ones.

At the same time, it is naive to think that everyone who puts their opinions out into the public does so earnestly, out of a genuine desire to articulate their beliefs. People who spread hate speech do so out of maliciousness; their intent is to cause distress and sow discord. Hate speech does not promote intellectual diversity; instead of promoting growth and constructive discussion, it builds walls and tears people down.

However, although I have mixed feelings on the subject of online censorship in general, I have more well-defined positions on more specific questions relating to online censorship.

Is it ethical for companies to remove dissenting opinions for governments? Is it ethical for companies to remove information that does not promote or share their interests or political beliefs?

To me, these two questions are asking the same thing–is it ethical for companies to remove non-harmful content if they or someone else disagrees with it? While I disagree with this censorship of different beliefs and believe that it stunts the development of society, I would not go so far as to say it is unethical, depending on the purpose and mission of the company. For news organizations, the mission is to inform the public about current events. If a staff writer pens an article with a different ideology than that of the organization, but one that still reports only true facts, I do think it would be unethical to fire that writer.

However, there are many other companies, including social media sites, that are not news organizations and do not proclaim their mission to inform the public. If a pro-life Christian forum removed a post for advocating the benefits of contraceptives or abortion, I don’t believe many would call the forum moderators unethical, whether or not they agree with their beliefs. Likewise, if a feminist forum removed a post that denied the existence of rape culture, it is hard to see how anyone could claim that is unethical. In neither of those cases did the forums claim to be unbiased–or even if they don’t explicitly state that fact, it’s still pretty clear that they are not.

This brings me to my main point–it is not unethical for companies to remove opinions that they, or someone else like the government, disagree with, but it is unethical for companies to assert themselves as an unbiased source, and then go on to censor dissenting viewpoints. While Google does censor search results in China, in order to be allowed to provide services in that country, they do not pretend otherwise. If Facebook is censoring user posts, that in and of itself is not unethical. However, as Reem Suleiman said, “if Facebook is making decisions about how news reaches the public, then it needs to be transparent about how those decisions are made.” Facebook cannot pretend to be “a place where people feel free to express themselves within reason” while censoring people’s expression.

Is it ethical for companies to remove information broadcast by terrorist organizations, or about terrorist organizations?

I believe it is ethical for companies to remove information broadcast by terrorist organizations. If the information is broadcast by terrorists, then they want the public to hear that message, and to spread it is to aid them. However, I won’t go quite so far as to say that it is unethical for companies not to remove information broadcast by terrorist organizations, as this goes into the territory of hearing opposing viewpoints in order to better understand why you hold your own beliefs.

However, information about terrorist organizations is more of a grey area, and I think this question is better answered by individual companies themselves, according to their company missions. Facebook firmly states, “There is no place for terrorists on Facebook,” while Twitter is slightly more lax in order to avoid censorship. I don’t think one option is more ethical than the other, given all the complexities and nuances of freedom of speech and freedom of information.

Is it ethical for companies to remove discriminatory, provocative, or hateful content generated by their users?

There is no question for me on this; it is always ethical for companies to remove hate speech generated by their users. As I already mentioned above, hate speech is generated not by a desire to create open discussion and the sharing of beliefs, but out of malicious intent and a desire to cause distress.

That being said, actually distinguishing hate speech from merely unsavory opinion is a difficult task. Discriminatory or provocative content might not necessarily be hate speech; perhaps the person proclaiming these opinions is truly ignorant. Removing this content when it is not clearly hate speech may eliminate an opportunity to educate that person, and lead to the creation of echo chambers. The person whose content is removed will instead move to other sites filled with people sharing the same opinion, and the site that removed the content will also become more homogeneous in its content.

Content needs to be reviewed carefully and thoughtfully to determine whether it is hate speech. Tech companies are doing their best to practice this, but this is definitely a difficult task.

A Corporation’s a Person No Matter How Large?

What is Corporate Personhood?

Corporate Personhood is the proposal that corporations should be afforded the same constitutional rights as individual people. The rights that we enjoy every day, such as our freedom of speech, are rights that also belong to corporations, according to the philosophy of Corporate Personhood. However, in practice, this term seems a bit misleading to me. As acknowledged by several articles in the readings, corporations are not granted all the same rights as individuals–corporations cannot invoke the Fifth Amendment protections against self-incrimination, for example. In fact, it seems that the courts have been able to pick and choose which constitutional rights apply to corporations, as evidenced by the quote below.

“Corporations have no Fifth Amendment privilege against self-incrimination. On the other hand, the courts have recognized or have assumed that corporations have a First Amendment right to free speech; a Fourth Amendment protection against unreasonable searches and seizures; a Fifth Amendment right to due process and protection against double jeopardy; Sixth Amendment rights to counsel, jury trial, speedy trial, and to confront accusers, and to subpoena witnesses; and Eighth Amendment protection against excessive fines.”

This practice creates an inconsistent definition of Corporate Personhood. How can certain rights be afforded to corporations with the argument that they should have the same rights as people, yet other rights be denied with the argument that corporations are not, in fact, people? If corporations are truly to be considered the same as individual people, then they should also bear the same moral obligations as individuals, as well as face the same consequences as individuals if they break the law.

Corporations and the Muslim Registry

I completely believe that tech workers and tech companies are right in pledging to not work on an immigration database or a Muslim Registry. I believe that such a database amounts to blatant discrimination and has the potential to cause massive amounts of harm to vulnerable groups of people–the previous NSEERS program already stirred fear and concern when it was enacted. To comply and participate in building or maintaining such a database is morally wrong, and corporations have a responsibility to make decisions that are not unethical.

As far as making business decisions based on morality goes, however, I do not believe corporations have a responsibility to make decisions that are ethical. Morality and ethics are not black and white, and many decisions can be made that are neither ethical nor unethical. As a side note, earlier this semester, we discussed the business endeavors of Emerson Spartz, who created an enterprise revolving around “click-bait,” and it was suggested that his business should instead focus on creating a product that can do good in the world. I would argue that there is nothing wrong with his business not creating good, as long as it is not creating evil.

Coming back to the topic of the Muslim Registry, I don’t see any way of framing this database such that it is not clearly morally wrong, and corporations should make their business decisions accordingly to avoid acting unethically. However, I do realize that other people may have different moral views than I do, and those people may be highly ranked within various corporations. This brings us to the question: who decides what’s right or wrong for a company? This is not so much a question of morality–clearly, high-ranking officers in a company will be making the decisions about what is right or wrong for the company. However, if a decision conflicts with other employees’ personal moral views, I believe those employees have a responsibility to do what they can to not be complicit in it, like the tech workers who signed the pledge to never help with the Muslim Registry even before their companies released statements affirming the same thing.

Morality and Ethics of Corporations

Although Kent Greenfield brings up some good points in his article on why Corporate Personhood actually helps in making corporations more accountable, I still disagree with the concept of Corporate Personhood. Just the idea of a corporation being treated the same as a person, when it is so clearly different, doesn’t sit right with me. In addition, I mentioned earlier the inconsistencies with how the courts have been treating corporations with respect to Corporate Personhood–rather than trying to twist our constitutional rights to fit corporations, we should instead create new laws to apply specifically to corporations.

Along those lines, I believe morality and ethics are a more difficult topic for corporations than they are for individuals. Of course, morality and ethics apply to corporations, and they should never act unethically. However, since corporations are made up of many people with different moral codes, rather than a single person, what counts as “unethical” is a grey area. What seems unethical to an entry-level worker may fit perfectly into the moral code of the CEO. In the end, decisions should be made after long, careful discussions among multiple officers in the corporation about the morality, implications, and consequences of each decision.

Your Blog Will Load After This Ad

I’ve never personally minded advertisements. Sure, when I’m watching television and there’s an advertisement break right at a crucial plot point in a show, I get a little annoyed and anxious for my show to resume. However, my mild exasperation isn’t on the same level as others who have a deep resentment for advertising, especially online advertising.

I’ve never used Adblock, or any other ad-blocking software. To be honest, however, this isn’t because I find it unethical to use this software–I’ve never really given any thought to whether Adblock is ethical or not before. I’ve just never been annoyed enough by online advertising to prompt me to go download Adblock. A large part of this is because I also rarely visit sites that deluge me with a storm of popups and flashy ads; most of my online traffic is limited to Facebook, YouTube, large news sites, and StackOverflow–not sites that will spring a “Congrats! You’re our millionth visitor!” popup on me.

Regardless, I don’t believe using an ad-blocking tool is always unethical, or at least it isn’t a cut-and-dried, black-and-white issue. It’s true–ad-blocking can be detrimental for smaller websites that provide free content and earn their revenue by users viewing ads on their page. In an article published on Ars Technica, the author compares using an ad-blocker to eating at a restaurant without paying. After all, websites are using their own resources to provide us with free content, and we refuse to view the advertisements on their site in return.

However, supporters of ad-blocking push back, arguing that “publishers have brought ad blocking upon themselves by creating a Web reading environment that’s often hostile to readers.” They argue that the deluge of popups I mentioned earlier is not just annoying; it can also pose security risks and detrimentally affect the performance of a website. Not all websites engage in these practices, but with so many that do, it’s no wonder that many users turn to ad-blocking tools, especially in light of incidents in which compromised ad networks introduced malware onto people’s computers. In my opinion, the ethical thing to do is to whitelist websites that are trying to utilize online advertising in a prudent manner.

The other contentious matter surrounding online advertising is the mass collection and sale of user data for the purposes of targeted advertising. While I understand why some people are uneasy with this, I don’t believe it is unethical of companies to sell their users’ data. These companies are providing free services to us, and they need to be able to make revenue. As The Atlantic put it, “People like no-cost services, and are willing to forfeit some privacy in exchange for them.” In addition, these companies are not collecting data without our knowledge–it’s pretty common knowledge that companies like Facebook and Google have been selling our data to advertisers. With this knowledge, users are making an informed decision when they choose to use Facebook or Google. If they are not okay with their data being collected, they can simply choose not to use these services.

I Spy

In general, I am pretty indifferent toward government surveillance. While I understand “Big Brother” style fears, I am usually of the mindset that if you’ve got nothing to hide, you’ve got nothing to fear. Personally, I don’t care if the government can view my search history (I once inadvertently Googled how to “simultaneously terminate parent and child” while attempting to complete an OS assignment, and joked about how that probably landed me on an NSA watchlist) or monitor who I contact online. In most cases, I am of the opinion that government surveillance can and does help national security, averting threats to the general public. However, I know that not everyone shares my indifferent attitude about privacy, and I do understand the viewpoint of others who place a stronger emphasis on the need to protect individual privacy. I absolutely agree with the necessity of engaging in constructive dialogue over this issue.

That being said, I don’t believe Apple had any moral obligation to comply with the FBI’s request to build a backdoor into an encrypted iPhone in the San Bernardino case. Tim Cook’s open letter mentions two primary objections to the FBI’s request. The first is that even though the FBI is asking for a backdoor into only one specific phone, that technology could be used to unlock any iPhone, and in the wrong hands would threaten the data security of countless users.

Although the FBI did manage to access the iPhone without Apple’s help, by exploiting an existing vulnerability with the help of a third party, they are considering not disclosing the details of the vulnerability to Apple, preventing Apple from patching it. Just as I believe that Apple should not have to build a backdoor into their product, I also believe the FBI has no obligation to reveal the details of the security vulnerability they found. However, earlier this year, one of the third-party companies that the FBI works with was hacked, resulting in some iOS cracking tools being obtained by the hackers and posted online with a warning to the FBI: “@FBI Be careful in what you wish for.” While the most sensitive tools were not leaked, this clearly goes to show that Apple’s concerns are not unfounded.

The second objection that Tim Cook raised is that if Apple is legally made to comply with the FBI’s request, it sets a dangerous precedent for what the government can and cannot do in terms of accessing personal data.

To illustrate my point, I want to draw an analogy, albeit a bit of an extreme one. Imagine a slightly more technologically advanced world, one that has fully bought into the idea of the “Internet of Things.” In this world, say there is a man who wears a pacemaker connected to this “Internet of Things.” The manufacturer of this pacemaker can remotely access it, and all the other pacemakers it has made, in order to analyze health data and help the medical field.

However, let us pretend for a minute that this man has been identified as the perpetrator of a horrific terrorist attack, and that he has evaded capture and is on the loose. Instead of asking Apple to build a backdoor into an encrypted iPhone, the FBI demands that the pacemaker manufacturer build a backdoor into the pacemaker, one that does not currently exist, so that they can take control of this man’s pacemaker and fatally shock his heart.

There is still a level of moral ambiguity to this scenario. After all, this man is a danger to the public as long as he is on the loose, and the pacemaker manufacturer has the power to stop him. However, I think most people would be highly uncomfortable with the idea that the government could force the manufacturer to build a backdoor allowing them to stop the heart of anyone wearing one of these pacemakers. In the wrong hands, catastrophe would ensue.

I acknowledge that my analogy is quite extreme and a bit of a stretch; an individual’s right to privacy is not as important as their right to life. However, while I don’t mean to fall into the slippery slope fallacy, I do think it is a fair question to ask where the line should be drawn. When is it okay for the government to demand that a company compromise the integrity of its product?

In short, I do support government surveillance in order to protect national security, but I do not support absolute free rein for the government to do whatever it wants to achieve this goal. I agree with Obama’s statement that “You cannot take an absolutist view on this.” Both individual privacy and national security are important, and compromises do have to be made on both sides. However, I believe that a line needs to be drawn somewhere, and as far as the San Bernardino case goes, that line is at forcing a company to create a flaw, one that does not already exist, in its own product.


Hidden Figures

Podcast: https://drive.google.com/file/d/0B5Ot2uisSU8wRnBpdGpVd1BaM0k/view?usp=sharing

Thanks to women like Katherine Johnson, Dorothy Vaughan, and Mary Jackson, it is much easier today for women and minorities to succeed in STEM fields. In high school, I dreamed of working at NASA, something that would be impossible if not for the efforts of women like those in Hidden Figures.

However, though they accomplished much by allowing women and minorities to break into STEM fields, their work is not yet complete. Women and minorities are still disproportionately underrepresented in fields like computer science and physics. Some obstacles that persist even today are obvious challenges such as sexual harassment in the workplace, but there are also more subtle hurdles, like social norms that suggest women are better suited to less technical fields. The biggest challenge for women and minorities is actually the lack of women and minorities itself. People are more comfortable with people like themselves; this has the dual effect of making it harder for women and minorities to enter tech fields in the first place, and then making it less welcoming for them to stay. However, the situation is still much improved from when Johnson, Vaughan, and Jackson broke into their fields, and it will hopefully keep improving every day.

Female and minority role models are a huge help in achieving this goal. When young girls and young minority kids see someone who looks like them pursuing their dreams, it shows them that they can do the same. Having a role model is a huge encouragement and can provide motivation when things get difficult. Growing up, my role models were actually my friends in school. Most of my friends were girls who loved learning and didn’t hesitate to chase their dreams–this influenced me and helped me gain the confidence to study my fields of interest in college. My parents also supported and encouraged me to study STEM fields, exposing me to areas like robotics and chemistry at a young age. I’ve been lucky enough to be surrounded by positive influences since childhood that helped me get to where I am today.

When Things Go Wrong

Accidents happen. It’s a fact of life that humans are not perfect, and as such, will never be able to design perfect systems. Even so, we have a responsibility to do everything in our power to minimize the risks to others and prevent tragedies from occurring. Responsible design and testing are especially necessary for safety-critical systems, whose failure can result in harm or death to humans.

The root cause of the Therac-25 accidents was a combination of haphazard system design, a lack of safeguards, insufficient testing, and AECL’s initial refusal to accept responsibility. Although the direct cause of the accidents was a hard-to-replicate software bug related to timing, the accidents could still have been averted by hardware safeguards, like those that were in place in the Therac-20. However, for whatever reason, AECL decided to remove these hardware interlocks, which would have prevented the machine from being activated in the incorrect mode. They put all their trust in their software’s ability to detect dangerous situations, discounting the need for hardware safeguards in case the software failed. In addition, no timing analysis testing was done. “Killed by a Machine: The Therac-25” describes how the software bug was finally replicated by enlisting the aid of a technician who was an actual end-user of the machine. The bug occurred when the user first selected X-ray mode but then switched to Electron mode before the machine finished setting up for X-ray mode; this caused a race condition in which the turntable would end up in an unknown state. The bug was never found in AECL’s previous testing because that testing was all done slowly, step by step; it did not mimic real-life usage. Even after a patient suffered radiation burns following treatment from the Therac-25, “the manufacturer and operators of the machine refused to believe that it could have been caused by the Therac-25.” It took a second patient suffering harm before AECL took any action to investigate. While even the first accident should not have occurred, the fact that six patients suffered injury because of the Therac-25 is a clear indication of bad engineering practices.
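To make the race condition concrete, here is a minimal sketch in Python (my own illustration; the actual Therac-25 software was custom assembly, and the details here are simplified). A mode edit that lands inside a slow set-up task leaves the software’s idea of the mode out of sync with the hardware:

```python
import threading
import time

selected_mode = "XRAY"      # what the operator has selected
turntable_position = None   # what the hardware actually gets set up for

def setup_turntable():
    """Set-up task: positions the turntable for whatever mode was
    selected when set-up began."""
    global turntable_position
    mode = selected_mode    # read the selection...
    time.sleep(0.5)         # ...slow mechanical set-up: the race window
    turntable_position = mode

def operator_edit():
    """A fast operator switches modes inside the set-up window."""
    global selected_mode
    time.sleep(0.1)
    selected_mode = "ELECTRON"

setup = threading.Thread(target=setup_turntable)
edit = threading.Thread(target=operator_edit)
setup.start(); edit.start()
setup.join(); edit.join()

# The software now believes it is in ELECTRON mode, but the turntable
# was positioned for XRAY -- an inconsistent, potentially dangerous state.
print(f"mode={selected_mode}, turntable={turntable_position}")
```

A hardware interlock, like the Therac-20’s, would have refused to fire in this inconsistent state regardless of what the software believed.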

Engineers working on safety-critical systems have extremely difficult jobs, because they need to be able to foresee all the potential ways things can go wrong, and then prevent them from happening. When working on these kinds of projects, it is important that they understand the gravity of their work. To prevent tragedies, companies have to be okay with redundancy; while adding extra failsafes may use up more resources and manpower, it is always worth it for a safer product. Extensive code reviews are another safeguard; unlike the single programmer who wrote all of the source code for the Therac-25, a team of engineers reviewing each other’s code is much more likely to catch bugs like the Therac-25’s race condition. In addition, rigorous testing, both by engineering teams and by end-users, is always necessary. No component of a safety-critical product should ever go live without first being tested. Engineers of safety-critical systems must actively search for edge cases that cause system failure, and then prevent those cases from ever occurring.

However, what happens when, even with all these safeguards in place, the system still fails? Can we still blame the manufacturers?

This is a difficult question, as it is still the company and its engineers who created the product that failed. This is why I believe there needs to be a strict set of standards and regulations, detailing fail-safes, reviews, and testing, to be adhered to when safety-critical systems are in question. If the manufacturer can show that, by following these standards, they did everything in their power to prevent the product failure before it occurred, then they should be free of liability; it is not reasonable to punish them for not being perfect, and doing so would only deter others from technological pursuits that could truly benefit the world. However, if shortcuts were taken or corners were cut, then the manufacturers and engineers absolutely need to be held liable for shirking their responsibilities in creating a safety-critical system. In the case of the Therac-25, several deaths could have been prevented by better engineering practices, and it is our moral obligation to learn from that tragedy and work to prevent accidents like it from happening in the future.