Thinking Cap #5 #133
-
I define ethics as the distinction between what is morally right and what is morally wrong. Technology and society are intertwined; technology cannot be separated from society. Technology can benefit society, but it can also exacerbate existing social inequities. I firmly believe that people who have access to technology and know how to use it are responsible for reconciling its use with its impacts on society. For example, when people use technology such as GIS and digital mapping, they have to consider ethics and ask themselves whether the use of this technology is helping or harming a community. It is necessary to ensure that the data represented in a map is accurate to a community's story, but it is also necessary to understand the impact of the technology beyond the scope of simply creating a map to solve a problem. Technology impacts humans and society. If these tools turn out to be hurting a community, then people need to rethink their approach to, and goal in, solving the problem. This is especially important when dealing with platforms such as Facebook and Google, which create social media applications and search engines respectively, and which focus primarily on profit through marketing ads, harvesting users' data, and building algorithms, rather than on how they can use this technology and their knowledge to help society. Sacrificing society for profit is unethical. Trying to separate technology from society is unethical. My ethical views empower civil society because they take a people-oriented approach to using technology and promote accountability. Society and people should always be prioritized. People who use or create technology and follow my ethical views (or even consider ethics) should put people at the forefront and understand the responsibility they have to help society. Doing so would help alleviate technology's harm to society. People should always use technology for good (instead of evil).
-
As I've learned more about deeply rooted societal issues and grown more socially aware and present, my views and moral obligations have expanded. While I think we all have our own basic level of personal ethics, I also think there is room to build on and add to what we consider right/wrong or good/bad. The relationship between society and technological ethics also has to be recognized as ever-changing. The growing availability of technology increases the likelihood that ethical practices slip through the cracks. I make this statement in reference to large corporations as well as individual users, because both can readily produce, for instance, web maps or spatial-data-based articles that actively and negatively affect communities and the humans in them. Regardless of intent, anything produced using technology and sent out into the world is going to have positive and negative impacts within society. A common unethical power relationship arises when technology creates an opportunity, a large corporation monetizes it, and the corporation then exploits its workers to turn a larger profit. Companies like Amazon, Uber, DoorDash, and others are examples, because they actively take advantage of their employees or "employ questionable tactics," as Indian Lee writes in "The Growth of Corporate Exploitation Worldwide in 2020." The decisions made by these companies affect people and communities who may rely on this door-to-door innovation to make extra money, support their families, and more. When it is considered "normal," or I'll even go as far as to say acceptable, for workers to be denied rightful tips, bathroom breaks, and safety at work, then it should be recognized that there is a problem. Yes, Amazon provides convenient services to its users and makes a hefty profit doing so, because technological innovation has allowed it to, but the people who keep Amazon running do not receive the same advantages. To touch on an earlier point, ethical practices and accountability often take a back seat to profit, and in the process the well-being of human beings and of society as a whole is sacrificed with little to no real consequence. While my example is not directly tied to web or digital mapping, I think the same conclusion can be drawn about politicians, governments, and others who utilize GIS and data to promote their own agendas at the cost of anyone and everyone. My ethical views work to empower civil society by emphasizing accountability, bringing focus to the community level, calling attention to the importance of listening, and ultimately encouraging all of these things in relation to the employment of technology, since it is one of the reigning pillars of modern society. https://www.volusion.com/blog/the-growth-of-corporate-exploitation-worldwide/
-
I identify autonomy and a diversity of intellectual ideals as the main ethics that meet at the intersection of society and technology. As an ever-rising juggernaut within our daily lives, technology should be perceived as a medium or tool rather than as the main driving force of our lives. Despite the plethora of information and learning possibilities available, systems of neoliberalism ensure that any access we desire is prefigured by an ad, a paywall, or Google Analytics software gathering more of our personal information. Profit should not be a precondition for learning in any capacity. This profit motive, rightly identified in Paris Marx's Jacobin article, appears to advance a line of reasoning that places its faith in "the invisible hand" and laissez-faire politics for technological development. When this motivation is so blindly followed that it pushes a significant portion of the American electorate to believe in wild conspiracy theories, is this still a beneficial goal?
-
Society and technology have to grow together, but this can only happen if technology is open enough for communities to contribute input yet private enough for individuals to protect themselves. With the creation of open-source platforms, users can come together as a collective whole to uplift and help one another. Users can also educate each other or showcase their perspectives on those platforms, creating progress that not only improves the understanding and clarity of issues but also brings civil society together. However, I do believe that, to some extent, technology can harm society more than society can harm technology. For instance, capitalist approaches to technology can be harmful because they can invade individuals' privacy. Technology can also misrepresent data, creating dangerous interpretations that only hurt marginalized communities. Data without narratives attached loses its meaning for communities, which is why so many harmful portrayals of disempowered groups persist. Society and technology have to grow together in order to advance education and improve community conditions.
-
Daryl Koehn and Dawn Elm stated that "we do not study ethics merely for the sake of clarifying problems. We study ethics with a view to living better, fuller and wiser lives" (Grobman & Ramsey 2020). If ethics is about living "better, fuller and wiser lives," we ought to ask what constitutes such a life. A more pressing question is: why isn't community a part of this argument for ethics? Technology has always been bred by people, both as a means to ease productivity and as a means to exploit other individuals. The flexible usage of technology will always be intertwined with everything we do as a society; it has been since the inception of human civilizations. Technologies such as granaries, aqueducts, and writing have been considered beneficial to all members of society, yet there are those who strictly benefit from these technologies and those who have labored to provide resources to other members of the community (not barring communal societies where work can be shared equally, though even there disparities are to be found). It is simple to abstract actions into the binary categories of "good" and "bad," with the "good" supporting social norms and elevating societal members to be "better versions" of themselves, while the "bad" goes against social norms and harms members of the community. Yet it is difficult to state what is objectively good or objectively evil when everyone has different agendas and perspectives. For some, civil society may never be a part of ethics, since ethics operates on an individual basis. But I believe in an ethic that encourages different perspectives in order to actually improve society through multiple means, not pigeonholing oneself to a single methodology. My personal ethics draws somewhat on Aristotelian ethics, in that I believe there is a state humans reach when we fully achieve a "good life"; that state is called eudaimonia. In a good life we achieve a perfect happiness that allows us to do good. Yet we might feel that we are flourishing in many aspects of our life when, in reality, that flourishing does not align with our telos, our purpose in life. This is not the true happiness that allows us to reach eudaimonia, because it lacks virtue (Dimmock & Fisher 2017). In everything that we do, we ought to be virtuous. Where my argument potentially departs from Aristotelian ethics is in how virtue is defined. Virtue, for me, is that which benefits civil society. A society that disallows predatory behavior, reliance on merely technological and philosophical means, and reductionism is a virtuous society in which true happiness resides. I subscribe to a Thomistic ideology: commenting on Rhetoric Bk. II, Aquinas writes that "to love is to will the good for someone (amare est velle alicui bonum)" (Gallagher 1999). I disagree, though, with the implication that "to will" means imposing good on others (for who are we to dictate our idea of "good" to someone?). Rather, we serve the community through people-based organizations and interdisciplinary studies. Inclusivity of viewpoints is exceptionally crucial when wanting to do right by a community. To elevate civil society is to listen to the needs of the community and to serve them. For ethics often lacks the social connectivity that binds us all as humans, which is the nucleus of civil society.
RESOURCES
Dimmock, Mark, and Andrew Fisher. "Aristotelian Virtue Ethics." In Ethics for A-Level, 49-63. Cambridge, UK: Open Book Publishers, 2017. Accessed May 3, 2021. http://www.jstor.org/stable/j.ctt1wc7r6j.7.
Gallagher, David M. "Thomas Aquinas on Self-Love as the Basis for Love of Others." Acta Philosophica 8, no. 1 (1999): 23-44.
Grobman, Laurie, and E. Michele Ramsey. "Ethics." In Major Decisions: College, Career, and the Case for the Humanities, 158-67. Philadelphia: University of Pennsylvania Press, 2020. Accessed May 3, 2021. http://www.jstor.org/stable/j.ctv16t6dzr.19.
-
In my opinion, though technology is a tool that is neither inherently good nor bad, technology and society both have the capability to mutually reinforce systems of inequity and oppression. While datasets and algorithms are not inherently oppressive, the ways they are envisioned, created, developed, and implemented may very well be. In a very technologically deterministic manner, technology, data, and algorithms have been used to justify racist and oppressive policing. This occurs when tech is assumed to be capable of enacting justice, under the premise that an inherently "objective" tool such as data can be what produces law and order. PredPol, a predictive policing software developed by UCLA professor Jeff Brantingham, uses mathematical models originally built to predict earthquake aftershocks in order to predict future crimes. While the original earthquake model may not be inherently good or bad ethically (it could even be used to benefit society as a whole), the way this technology has been adapted and used is incredibly oppressive. On its face, data can seem objective and devoid of sociopolitical bearing. However, to assert that patterns of crime can be simplified to map onto seismographic patterns is incredibly reductionist and technologically deterministic, for it fails to account for the societal structures that drive crime and that construct what counts as crime. Technology, in this way, turns a blind eye to systems of oppression, denying both the existence of unjust criminalization and its own role in perpetuating it. At the same time, police departments' and society's use of this tool reinforces its bias. The nature of the algorithm is to predict future crimes where previous ones have already been recorded. While this works effectively for earthquakes, it does not properly predict crime; instead, it subjects certain neighborhoods to heightened police monitoring and surveillance, which "reveals" increased criminal activity in those areas that is by and large a result of police use of the tool itself. When police are encouraged to look for crime where PredPol and other software predict it, drastic biases emerge in the recorded distribution of "criminal activity" and of policing activity. This can reinforce, and be reinforced by, disproportionate policing of certain racial-ethnic communities over others. In this sense, society's use of these tools, its desire to use data for "objective" ends, and its own definitions of crime perpetuate and permit their oppressive use. Police reliance on such data deepens this flawed faith in technology to produce "objective" outcomes. Software and programs such as PredPol, Palantir, and Operation LASER are used extensively by departments like the LAPD to justify police presence, and the assertion that technology can make policing and law enforcement more effective further encourages militarization and increased spending on the carceral system. This understanding of technology, and of the way it can create more oppression, is key for civil society and for community organizations like Stop LAPD Spying that are critical of the role technology and data play in our criminal justice and carceral systems.
The recognition that technology itself is inherently neither good nor bad, but can be used in violent ways, is key to challenging the power it has over the systems that uphold anti-Blackness in our punitive institutions. To critique technology in this way is to challenge technologically deterministic understandings of justice and to recognize that technology is neither infallible nor all-powerful. Rather, it has its own pitfalls, and to create a society that is more civil, more just, and more equitable, we must build a person-focused system that emphasizes social change supported by technology, and technological change supported by society. Abolitionist organizations that are skeptical and interrogative of the tools currently used to enact justice are powerful and necessary.
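To make the feedback loop described above concrete, here is a minimal toy simulation (my own hypothetical sketch with made-up numbers, not PredPol's actual model): two neighborhoods have identical true crime rates, but patrols follow the recorded history, and crime is only recorded where a patrol is present, so a small initial disparity in the records perpetuates itself.

```python
# Toy model of a predictive-policing feedback loop (hypothetical numbers,
# not PredPol's algorithm): patrols chase recorded crime, and recording
# requires a patrol, so the records drift apart on their own.
import random

random.seed(0)

TRUE_RATE = {"A": 0.5, "B": 0.5}   # identical underlying crime rates
recorded = {"A": 12, "B": 10}      # slightly uneven historical records
PATROLS = 100                      # patrols allocated each cycle

for year in range(10):
    total = sum(recorded.values())
    for hood in recorded:
        # Allocate patrols in proportion to *recorded* history, not truth.
        patrols = round(PATROLS * recorded[hood] / total)
        # A crime is recorded only when a patrol is there to observe it.
        recorded[hood] += sum(
            random.random() < TRUE_RATE[hood] for _ in range(patrols)
        )
    print(year, recorded)
```

Even though both neighborhoods are identical by construction, neighborhood A keeps drawing more patrols, so its recorded crime grows faster, locking in a disparity that the true rates never justified.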
-
I wonder how far back the term "technology" reaches. Technology generally means an application of knowledge. Can we then say a car is good or bad? The article by Paris Marx also mentions how people either ignore or condemn the bicycle. Don't cars and bicycles help society by efficiently transporting people? Yet those modes of transportation can also harm people in crashes. I think this is where ethics plays a part, and people begin to figure out how to transport people while maximizing safety; hence regulations like seatbelt and helmet laws are enforced for safety's sake. Maybe because I grew up in a Buddhist community, I tend to think that things like technology and society exist as they are, and people have to find the right balance. I believe that society and technology exist through, and are improved by, one another, while each can also harm the other. But they can definitely coexist (this sounds like yin and yang). We should therefore regulate technology up to a certain limit so that it does not overrun society, while not ignoring it to the point that technology completely controls society. One example of how they can be brought back into balance is the recent US election. People used social media to ignite hatred among us, but society and big tech companies then limited the damage by banning accounts and flagging inappropriate posts. This shows how society can rein technology back in when it gets out of control. In terms of empowering civil society, this aligns with my view that as long as people realize how to keep society in balance with technology, each can improve the other. If either one goes down too far, the other will bring it back into balance. This rests on my assumption that people, who are products of society and technology, always look after both, and always unite to restore the balance (insert Thanos meme).
-
I believe in a dialectical approach to ethics. Because we all have implicit biases and different experiential backgrounds (educational, cultural, etc.), I think an openness to critique is essential. Theoretical knowledge is irrelevant unless it is in conversation with experiential knowledge, which should take precedence in order to avoid technocracy.
-
I believe that society and technology are intertwined, and that ethics means the difference between what is morally right and morally wrong. I disagree with technological determinism, because technology cannot solve all the problems inherent in our society, and it can sometimes even perpetuate them. Just as eating too much of one food can be bad for you, the same goes for technology. For example, facial recognition is a very hot topic in the tech field. Many people claim it has great benefits for society, such as allowing more accurate criminal identification or increasing security, but there are many possible negative impacts as well. Used incorrectly, it can perpetuate racial profiling, something already prevalent in our society. Moreover, because technology is developed by humans, it is created with people's unconscious biases built in. Returning to the facial recognition example, there have been many cases where this technology caused real harm in people's lives because of the way it was programmed. When tested by law enforcement agencies, these systems misidentified many people of color, leading to false arrests. One really interesting article I read, by Alex Najibi, analyzed the accuracy of various facial recognition software and found a disturbing disparity between accuracy rates when identifying a white male versus a person of color: most facial recognition software averaged 90% accuracy when identifying white males, but the accuracy dropped an alarming 34% when identifying women of color. I think that when people decide to utilise technology within society, they must find the right balance. People also need to be aware of the possible impact that the use of technology can have on the community around them, and weigh the pros and cons before making any decisions. As I stated before, society and technology are intertwined, and technology used incorrectly can have negative impacts on society. For example, when using GIS and making maps, it's important to really understand the data you are trying to display before you begin your project, because the way you choose to analyse and present that data can have a big impact on the community the data comes from. As a person who creates software, I believe it is important for people like me, who understand how programs are made, to be held responsible for what we make. It isn't enough to release code to the public and claim to have nothing to do with what other people do with it; we must thoroughly think through what the technology can be used for and drive it toward something that benefits society. I believe my ethics empowers society because it forces people to take responsibility for the things they make. When people stop and think about the impact their creations can have on the community around them, it can drive them to make something that positively benefits society, instead of something made purely for their own profit.
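To put the cited disparity in concrete terms, here is a back-of-the-envelope sketch. The numbers are assumptions: I read the "34% drop" as percentage points off the 90% baseline, and the search volume is invented, not a figure from the Najibi article.

```python
# Rough arithmetic on the accuracy figures cited above (hypothetical
# search volume; treating the cited 34% drop as percentage points is
# my assumption, not a figure from the article).
searches = 10_000                       # hypothetical face searches
groups = {
    "white men": 0.90,                  # ~90% accuracy cited above
    "women of color": 0.90 - 0.34,      # accuracy after the cited drop
}

for group, accuracy in groups.items():
    misses = round(searches * (1 - accuracy))
    print(f"{group}: ~{misses} misidentifications per {searches:,} searches")
```

Under these assumptions, the same tool yields roughly 1,000 misidentifications per 10,000 searches for one group and about 4,400 for the other, which is why the point about false arrests scales so badly.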
-
My ethical views stem from the biblical teachings of Christianity, or I hope they do. There may have been tweaks, since even religion sometimes needs to adapt to the society and era in which it currently resides. I believe ethical views are shaped by a person's perspective on life and by other factors that have impacted that person's choices. There is a right and a wrong, but even those terms can be subjective. My pastor preaches that there are only "good" and "Godly," which I do agree with. What we do may be good, but not every good deed we do is Godly. For example, in the technology sector, we could build the most advanced AI to solve housing problems. In the first solution, the AI maps out areas that could become people's new homes, but those areas are far and remote from where people originally live, which the people do not want. In the second solution, the AI helps urban planners and architects design housing with reasonable pricing near the original location. Breaking it down, the first solution is good in that it helps people find housing; however, it strips away people's autonomy by giving them only the option of relocating somewhere far and remote. The second solution is good and can potentially be "Godly": by genuinely helping and listening to what the people want, we as the playmakers can adjust our innovative housing plans to fit the needs of others. There is a miracle in the Bible in which Jesus multiplied the little food that was available in order to feed thousands. Imagine if Jesus, wanting to help those hungry people, had performed a miracle that provided clothing instead of food. Society and technology often have the good purpose of helping humans live better and more efficiently. So in general, I don't think people who innovate in tech are necessarily bad people who want to rule the world with their tech and steal all their users' data (maybe this is me trying to have a positive outlook). Again, it goes back to perspective, because everyone has different opinions on what is good or bad and on the environment in which they define those terms. I hope that as society moves forward into this new technological era, leaders in this space will constantly remind themselves of the potential impact of their technology and ask, before bringing their products to market, "Will this impact the world positively, or will it leave those who are already behind even further behind?"
-
The only ethical frameworks I'm really familiar with are virtue ethics, deontology, and utilitarianism. The context in which I learned those frameworks was mostly guiding technical development and ensuring a level of quality in the final product. More specifically, they were taught to me as ways to identify ethical lapses that contribute negatively to the quality of an engineered product, mostly to encourage whistleblowing or to judge the ethics of technical endeavors. I'm not familiar with many ethical frameworks that relate technology directly to society, even after the readings, since I find the frameworks discussed quite nuanced and detailed and not well suited to me personally; I prefer a much more generalized framework because it's simply easier to think about technology that way. In terms of society and technology, however, I'm a firm believer in the idea that technologies, in their forms as applications, algorithms, and artificial intelligences, are all tools without any real inherent moral goodness or badness of their own. I'm going to complicate this belief with another opinion I hold: every developer and every user of a technological product shapes the moral and social value that the product can contain. Good technology, therefore, would be technology developed by people who intended their creation for some good purpose (Wikipedia and finding information, non-profit web projects, etc.) and then utilized by society in a good way (to pursue education, or to enrich lives by giving access to monetary, food-related, or medical aid, etc.). Any morally bad piece of technology would then fall under at least one of two categories: it was developed for some purpose that is negative to society (data analytics that exploit vulnerable groups for certain goods or services), or it can be (and, more importantly, is demonstrated to be) utilized for a negative purpose (NFTs with heavy carbon footprints, or machine learning implementations that require excessive amounts of data beyond what the algorithm actually needs). Attempts to reduce the malice certain technologies can present to society, namely by putting restrictions on technology itself and passing legislation that bans the usage of certain algorithms, would do good for society by preventing bad from being done, but I don't necessarily agree with that approach. I personally believe that leaving the door open when it comes to technology, in all its forms good or bad, has some merit to offer society. Efficient development techniques can be analyzed and harnessed for more moral endeavors than bad ones. Algorithms that perpetuate discrimination can be reworked and refined to reduce the system's inaccuracies. The best way to improve the good done to society through technology is to encourage positive interaction with it: promoting education on the matter and convincing people that there is good that can and should be done, in some cases by them personally, when developing and using tech.
-
Ethics by itself is a very nuanced topic despite seeming simple: what is right is right and what is wrong is wrong. But when you try to understand ethics within society and technology, it can get a little fuzzy. Technology and society nowadays are incredibly intertwined; there is not one without the other. Society and our communication with one another are now contingent on the advancement of technology and on how we use technological tools to inform our decisions. As GIS professionals, it is incredibly important that we are cognizant of our positions of power and make informed, ethical decisions. Our use of geodemographics, and our surveillance of trends, styles, and choices to forecast future trends, styles, and choices, can be unethical if not treated with care. As GIS professionals, we must ask ourselves whom our work is empowering, what story it is telling and whose story that is, and what message we are sending to the world. Making ethical decisions in GIS means dedicating power, skill, and resources to uplifting communities in need and coming together to find solutions that aren't necessarily tech-oriented, because technology often only plays into a larger global capitalist framework that actually hurts marginalized communities more.
-
My view of ethics is shaped by the concepts of justice and equity. Through that lens, ethical behavior is behavior that both advances AND protects the human rights of everyone, equitably and fairly. Hence, I do not see "do no harm" as the only hallmark of ethical behavior. Ethical behavior must also be proactive in advancing the greater good and in furthering the undoing of harm and suffering (my definition of suffering is informed as much by Buddhist philosophy as by a Marxist view of dialectical materialism). I am firm in my belief that inaction is reproachable, especially when we are aware of the unjust circumstances around us. When it comes to society and technology, then, my view is that for the use of technology to be ethical, it must purposefully advance the common good. I think technology is just a tool, so there is no use in qualifying it as good or bad. What matters is user impact; I say impact, as opposed to intention, because in the real world harm is harm whether you intend it or not. The Jacobin article aligns with this idea: the author eloquently describes how labeling social media as evil simply hides the larger structures in society that dictate the true impact of social media use, data capture, and data consumption. I would say that my view of ethics is one that inherently empowers communities by placing action at its core. Because the ethical use of technology actively seeks to improve the material conditions of the communities we work for, we have a chance to directly empower and benefit those communities. What change means needs to be determined by the community itself for the process to be fully empowering, highlighting the synergistic interplay of ethics and decolonialism.
-
Ethics is not a singular issue. The ethics we generally practice as a society are not the same set of rules and beliefs that exist in technology. Technology acts as a medium through which human beings can make (or fail to make) ethical decisions. By itself, technology is cold and unfeeling, and its bias is a by-product of how it was created. Technology is devoid of any morality on its own; it is shaped by developers and users alike. Fully ethical technology is a goal that will likely never be achieved. Developers, try as they may, will always create biased products, and wherever there is innovation there is room for abuse by users. However, I believe that using technology to generate moral and social value is what creates a sense of virtual ethics. Platforms such as Wikipedia and non-profits like Khan Academy use technology to provide services that positively impact millions. The moral framework of technology is molded by thousands of developers and hundreds of companies. Certain companies, however, hold monopolies on certain tech practices. Artificial intelligence, a field rife with innovation, is a frequent site of technological bias: algorithms and AI developed at companies like Google have disproportionately targeted minorities in applications such as facial recognition for criminal detection. There is a clear indication that technology is not perfect, and steps need to be taken to prevent discrimination. Ethics can help civil society move forward by serving as a baseline on which future innovation is built. Rather than creating new technologies for the sole purpose of profit, we should look into how technologies can be harnessed to promote education, prevent discrimination, and create opportunity. I believe that if we pursue utilitarianism and ensure the greatest good for the greatest number of people, we will find that data infringement is not part of the picture. Technological ethics (be it through legislation or coercion) should be the standard for achieving this greater good.
-
It’s impossible to look at technology as separate from people’s intentions, and it’s also hard to make broad statements about “technology,” since tech is defined by how it’s used and therefore by the people using it. Ethics are complicated and highly dependent on context, but to begin to look at the ethics of a tool we can first look at people’s intentions and then at the tools they might use. For example, while a discussion board is not harmful by itself, shutting down 8chan was justified because it was a place where people spread harmful ideologies. Technology is an extension of power (and regulating tech is a way to regulate people), so when thinking about the ethics of tech, it’s important to consider whose power it serves. At the same time, it’s hard to predict the purposes a particular tool will end up serving; even technologies originally created for military purposes, like radar and GPS, can end up improving people’s lives. I think one responsible approach is to make something with a directed purpose, e.g. making a particular nonprofit’s work more efficient, so that you are actively empowering a group. It’s also important to emphasize education and transparency when considering ethics; for example, Paris Marx discusses how social media ad targeting doesn’t work as well as the general public might believe, because companies overstate how accurate it is. When people think that algorithms are all-seeing, responsibility gets shifted away from the people making the technology and onto the technology itself, and people are misled about which figures they should be wary of. Education is important to shift people away from these views, which can feed into techno-determinism; I can see the transparency of open-source software helping in this regard, although it definitely isn’t a silver bullet.
-
The Week 5 lecture was my first introduction to techno- and social determinism, and I'm still trying to understand where my own ethics lie on that spectrum. Generally, I believe my ethics are more consequentialist, with an emphasis on social justice and equity. An ethical act has the most positive end result for the most marginalized people.
-
My life philosophy in general is to minimize any harm I do and help others when I can! I will add that even though those were the values that drove me for a really long time, my views on technology’s relationship to society were heavily influenced by the insularity of Silicon Valley, since that’s where I grew up. I appreciate this class for introducing me to different perspectives and inviting me to think critically about them! Before this class, I would have said that technology should have a neutral to positive effect on society. Now, I feel like that is much too general and reductionist a statement, since technology and society shape one another. I don’t think technology can be truly neutral, since there are consequences that might not be immediately apparent to someone designing the technology. As we’ve discussed in class, impact does not always match intent, and social good initiatives may harm the communities they’re attempting to empower. Other invisible consequences include the labor and materials that are extracted from other countries and the exporting of e-waste back to those countries. I like the ideal of “establishing a dialectical relationship, which modifies both internal and external perspectives,” which the Crampton paper describes as the last stage of ethical awareness in a discipline. Bridging the gap between developers and the people affected by their technologies, through communication and feedback, would help empower civil society. It would balance what the community actually wants and needs against what is feasible to create.
-
I must preface my ethical views on society and technology with my ethical views at large: ethics defines what is right and what is wrong. But, as the readings and lecture have shown, ethics is a more nuanced field than that binary would suggest. Before this week’s lecture and readings, I was of the mind that technology has an onus to improve social problems. I now realize this is a problematic mindset that can lead to techno-fascism. Realizations from this week’s content led me to assert that, when it comes to society and technology, my ethical view is that we must operate in holistic systems that work toward the common good of society. These ethics empower civil society by combining all aspects of technology and society to create comprehensive solutions to multifaceted issues. I subscribe to an ethics of inclusivity: all elements of society (art, the environment, etc.) must be utilized to work toward the collective good. This, of course, raises the question: what is the collective good, and who defines it? My response is that civil society, the community, and particularly the most marginalized communities should define that collective good, to which holistic systems (combining the merits of technology and all other aspects of society) can respond and work. In a nutshell, I believe that my ethics regarding society and technology are defined by what the most marginalized communities perceive as the common good. These ethics empower civil society because they are rooted in the needs of the community and use those needs as guidance for holistic solutions.
-
(Apologies in advance for the last-minute post; week 6 is kicking my butt.) Similarly to a lot of other responses, I also believe that technology itself is a tool, and whether it is used to further "good" or "evil" depends entirely on who is wielding the tool and for what purposes. Hank Green's book series, especially his second book, "A Beautifully Foolish Endeavor," is a great realistic sci-fi critique of the ideals of technological determinism. In the novels, a new technological advancement allows individuals to experience another person's consciousness. The tech is promoted as encouraging human empathy for marginalized communities, since privileged individuals can now fully "walk in another's shoes," but in reality the tech relies on and exploits the labor of the workers who must create and imagine luxurious experiences for the rich to live in. I think this is a pretty good parallel to how technological advances work in the real world: there is always this hope of further connecting humans and encouraging collaboration, but that hope falls apart when access to the technology is (1) not universal and (2) not regulated. It is very difficult to use technology to promote equity when technology itself is not yet equitable. Although I am not a philosopher, I would say that social and technological ethics should be framed around equity. One of my favorite quotes from the book is, "The most impactful thing you can do with power is almost always to give it away." To me, the key to empowering civil society is not to solve the problem for the community, because that doesn't address the existing power imbalances that were the catalyst for the problem in the first place. Rather, social policy and technological projects must increase the power and autonomy of the community involved.
-
My view on ethics when it comes to society and technology is similar to what one of the readings details: capitalism. At the forefront of technology is exploitation. As the old saying goes, "If you're not paying for it, you're the product." Companies place money before ethics because it maximizes profits; shareholders are happy, the CEO is happy, but you know what? SOCIETY SUFFERS. Since companies will not impose truly ethical and moral guidelines on themselves, I believe that in order to keep them accountable, we (the public) need to demonstrate and petition Congress. (But then that gets stopped by corporate lobbying, and sometimes, even worse, by tax-funded lobbying.) Most of us grew up during the dot-com boom, so technology is a central pillar of our daily lives and its effects are immeasurable. As I reap the benefits of these technologies, I tend to forget the darker parts of technology that are very unethical and immoral. Every piece of data collected from users is eventually going to be used, whether for an algorithm, modeling, identifying trends, etc. Palantir, with its very questionable and immoral connection to the United States government, is just one example of technology hurting society rather than solving one of its problems. I believe that data itself does not have bias, but data collection and data interpretation are another story. It is up to the person to interpret the data, and there is no impartial way of determining right or wrong. This is why ethics is so important to technology, and cascades down to society. It's not just one person's point of view that matters either; many times something is created for the sake of improving people's lives but causes the opposite of what was intended. Collectively, we have an obligation. I really identified with Albert's point that technology alone cannot fix society. I would love to build an application and magically fix systemic oppression and biased policing, but of course that doesn't happen in a simple app.
-
Due 5/4
Note: Responses submitted after Monday 2pm will not be eligible for awards.
Consider: What are your ethical views when it comes to society and technology? How do your ethics empower civil society?
Completing the readings will be particularly helpful in this thinking cap.
Submission
Make a new comment below and respond to any student's comment before class begins.