• Ethics and computing


    • This topic has 25 replies, 16 voices, and was last updated 9 months ago.
    #2668044

    ISSUE 21.19 • 2024-05-06 COMMENTARY By Michael A. Covington Computer ethics and AI ethics are easier than you think, for one big reason. That reason is simple: if it’s wrong to do something without a computer, it’s still wrong to do it with a computer.
    [See the full post at: Ethics and computing]

    Total of 23 users thanked author for this post.
    • #2668087

      I’ll group my responses, though they are really separate replies.

      (1) One of the biggest ethical issues in the world of computers is how to handle “crimes” committed in virtual worlds. Especially Meta’s Metaverse, where Avatars can interact with each other, and virtual land and virtual objects can be “purchased” and “sold” with real cash or crypto assets.

      There have been claims of serious psychological damage done when “male” avatars have assaulted “female” avatars, committing acts which, if actual people had done them in the real world, would have been considered serious crimes. Criminal and civil charges have been filed in actual courts of law, mostly regarding psychological damages. Judges and juries have so far been stymied about what to do with such charges, especially given the anonymity of avatars. (We can’t get the real names and locations of the “perpetrators” without the cooperation of the Meta Company, which has so far been totally lacking.)

      So no, computer ethics are not obvious, based on what would be considered crimes in the physical world. Laws and corporate ethics (and Terms of Service) have not kept up to date with end user behaviors in the Metaverse.

      I apologize for any adults-only topics I have raised here, but this is where the contrast between real-world ethics and computer ethics has been on display in the most stark contrast.

      (2) Blaming the architect(s) of the Internet for the lack of authentication is vastly oversimplifying the history (and in some places, the politics) of the development of the Internet and its security and privacy standards.

      The article says: “In the case of spam and robocalls, again I blame the design of the Internet and the telephone system.”

      Again, blaming the technical design is not productive, and it ignores the historical development of both types of technologies. Nevertheless, the implementation of these designs was not well thought out, and could easily be improved. But there is a strong financial disincentive to do so in both cases. Similarly with putting guardrails on user and platform behaviors with regard to social media and AI/AR/VR.

      The ethics of AI and its cousins deserve a separate topic, which they will get at AskWoody.

      ================

      Tricking a machine — in the sense of getting it to work in an unforeseen way — is, of course, not wrong.

      (3) Umm, this is called hacking. Even when we are talking only about the act of modifying computer software or firmware, it is in some situations explicitly illegal in many US states and many countries. Where financial or identity theft is the result, it is called by the same terms as if the trickery had happened with a physical lock.

      As a user of Linux and other Free and Open Source Software, I love it when we are explicitly or implicitly given permission to modify computer code to suit our individual needs. But this permission must be treated as something granted by developers, not something automatically assumed unless explicitly denied by the developers and applicable laws.  Whether we like it or not, licensing terms can usually be taken as having the force of law, until proven otherwise in a court of law or arbitration.

      ================

      (4) The British Post Office scandal was not a computer failure. It was a bureaucratic human failure. At the time, most British people trusted their Government way too much.

      -- rc primak

      7 users thanked author for this post.
      • #2669621

        Thanks for your interest.  Briefly:

        (1) That is a good point, and it’s essentially about the ontological status of things in simulated, fictional, or virtual worlds.  E.g., courting and “marrying” a woman in Second Life when you are already married in real life.  I was asked to discuss a case of that a while ago and concluded that it is emotional though not physical infidelity, and conducting it through an electronic communication system under pseudonyms does not change that.  Similarly, theft of things that human beings value in games or virtual worlds is theft.  After all, isn’t money, in its present form, basically something in a virtual world?  Dollars are quantities recorded in Federal Reserve Banks, not physical objects.

        (3) Not all of that is what I meant by “tricking a machine.”  I mean using hardware and software ingeniously, not in breach of promises or ethical obligations.

        1 user thanked author for this post.
    • #2668124

      Thank you for writing this piece. I wonder if those who engage in sketchy and/or self-serving behavior in their offline lives are more inclined to unethical behavior when online. I suspect the answer is yes. Blaming the system or the equipment is a cop-out, and there is no justification for doing something that adversely affects others. This is no different from abrogating ethics in any other context.

      And we now have a new category of users, “influencers,” whose efforts on social media are aimed at self-aggrandizement and often result in financial rewards.

      3 users thanked author for this post.
    • #2668056

      This article deserves a five-star rating for its distillation of right and wrong.

      PS  liked the bit about the elephant 🙂

      1 user thanked author for this post.
      • #2694968

        Remember, “there is no law against stealing elephants!”  I think that’s an example of enduring usefulness.

    • #2668064

      Thank you for an excellent piece, drawing attention to the fact that ethics are ethics. It’s very good advice that we should moderate our computer behaviour by asking whether this would be our normal behaviour outside of the computer world. I would say this is increasingly important with the teenage (not exclusively!) addiction to smartphones. Such addicts no longer appear to be aware of the REAL world. They live on another planet called Mini-Screen, where all the inhabitants speak a language called Text-Speak, and are largely amoral.

      • #2694969

        I call this the separate-world illusion.  When administering network acceptable-use policy at the U. of Georgia, I occasionally met young people who were deep in it, to the point of being unaware of other people in the real world.  It can reach a level that is close to what psychologists would call a personality disorder.

    • #2668145

      “That reason is simple: if it’s wrong to do something without a computer, it’s still wrong to do it with a computer.”

       

      Thank You.

      Unless you're in a hurry, just wait.

      2 users thanked author for this post.
    • #2668191

      Not surprised. What do you expect of a generation that buries itself in the video-gaming alternate reality, where they can do anything they want to others with no ramifications? Kinda explains why they are not good at forming healthy relationships with others.

      You are the result of what you have filling your mind.

      • #2669622

        Well… I’m not going to disparage the young.  There have always been people who buried themselves in fiction, sports, or games of one kind or another.  In fact, my experience at the University of Georgia was that students’ computer ethics was much worse in the 1990s than now.  Nowadays they have more experience and awareness and are guided by the people around them.

    • #2668190

      It was a well written article with many good points.

    • #2668288

      “Misinformation, whether malicious or simply negligent, can now harm far more people, and everyone has more responsibility.”

      I agree; everyone has the responsibility to monitor the information they receive and decide whether to ignore it, or to analyze it and judge it as fact, fiction, etc.

      Let the receiver of the information, whatever the information, beware.  This of course is the case with all information, whether from a family member, friend, or acquaintance, or from a named or unnamed source on the Internet or elsewhere.

      Known and unknown ulterior motives abound.  Most everyone wants or needs something and they will be motivated to get what they want; some will do so by whatever means necessary, nefarious or otherwise.  It’s a flawed and impossible task to prevent human nature from being…well…human nature.

      Whatever the information is, or from whomever the information emanates, the information could be true, or it could be mis-, dis- or mal-information.  It’s the recipient’s responsibility to make such determinations, not the information conveyors.

      Granted, the advent of the computer age and social media have significantly broadened the potential recipients of information. But that simply means that each and every recipient of information should be even more aware of their need to judge for themselves the accuracy and intent of the information they see or hear.  AI does nothing but add complexity and compound the difficulty involved.

      This was one of the tenets of the article: things formerly difficult to replicate are now easily replicated; hence, informational recipient beware.

      1 user thanked author for this post.
    • #2668289

      Thank you Michael Covington –

      Reading your article reminds me of a couple of things I’ve seen/read recently:

      – A PBS show: https://www.pbs.org/wgbh/masterpiece/shows/mr-bates-vs-the-post-office/# (“The computer says you owe us money, so pay up!”), and

      – An autobiography: https://en.wikipedia.org/wiki/Permanent_Record_(autobiography) (by Ed Snowden) – I found it in our local library’s Used Book Store for $2.00 – I’ve not yet finished it, but it is fascinating to hear from the man, himself, what drove him to do what he did – Recommended!)

      1 user thanked author for this post.
    • #2668410

      Consider, for example, the teenagers in several places who have reportedly used generative AI to create realistic nude pictures of their classmates. How should they be treated? Exactly as if they had been good artists and had drawn the images by hand. The only difference is that computers made it easier. Computers don’t change what’s right or wrong.

      Doing an oil painting IS a lot different from a deepfake porn image. People STILL believe in photographic evidence; THAT makes a difference!!

      🍻

      Just because you don't know where you are going doesn't mean any road will get you there.
      1 user thanked author for this post.
    • #2668443

      Well said, my friend!

      Part of the problem is definitely attitude-related.

    • #2668452

      A person I once interviewed for a job came in with a jail record. I asked him about it. “Everybody else was doing it” (guess who didn’t get the job).  It is human nature to find a way to justify actions.

      “I am like any other man. All I do is supply a demand” – Al Capone

      The problem is how to instill personal responsibility.

      Group A (but Telemetry disabled Tasks and Registry)
      1) Dell Inspiron with Win 11 64 Home permanently in dock due to "sorry spares no longer made".
      2) Dell Inspiron with Win 11 64 Home (substantial discount with Pro version available only at full price)

    • #2668483

      My biggest issue with AI is the programmers. As we have seen with Google Gemini, when a programmer is racist, it shows.

      Many, many items must be considered about who is programming AI. This is not an all-inclusive list of items to start looking into, but a start:

      Pro-Life or Pro-Abortion
      Democrat or Republican
      Pro Gun or Anti-gun
      CRT supporter
      Climate Chaos believer
      EV or Fossil Fuel
      Electric Stove or Gas
      Programmer’s background and family values
      Programmer’s marital status
      Is the programmer an only child or one of many siblings
      Does the programmer have children
      Military service or anti-Veteran
      Pro-Police or Defund Police
      Where born
      Where raised
      Where currently living
      Programmer’s music taste
      Programmer’s movie taste
      Programmer’s reading taste
      Programmer’s food taste
      Programmer’s drink taste (non-alcoholic)
      Programmer’s drink taste (alcoholic)
      Religious beliefs/faith
      Past and current financial status
      Ethnicity
      Sexual orientation
      Clothing style
      Body piercings
      Body ink
      Single language or multiple languages (speaking and writing)
      Programming language(s)

      As you can see, there are hundreds if not thousands of items to consider about who is programming AI and how it is programmed.

      • #2668509

        Does AI learn from programmers or from users? Does AI attract a certain type of user who accepts what is being said by AI, while what is being said deters others from using it?

        Personal biases exist everywhere (including mine). Not on your list: I recall telling a teacher that science disproves the claim that influenza vaccination causes autism, and that the vaccine did not give people influenza. He continued giving students anti-vaccination messages.

        The problem is not AI programmers. The problem is how to improve AI while retaining reasonable freedom of speech and rewarding creative thinking (as opposed to supporting groupthink).

         

        Group A (but Telemetry disabled Tasks and Registry)
        1) Dell Inspiron with Win 11 64 Home permanently in dock due to "sorry spares no longer made".
        2) Dell Inspiron with Win 11 64 Home (substantial discount with Pro version available only at full price)

        2 users thanked author for this post.
      • #2669623

        We have an AI article coming up, but briefly:  The racism (etc.) in some AI systems comes from the training data.  Programmers do not put it in.  Machine learning algorithms learn to recognize whatever patterns they encounter, often very efficiently.

        1 user thanked author for this post.
      • #2694970

        I’m not sure what kind of AI you have in mind.  Large language models (ChatGPT, etc.) are trained on input texts (generally as much as possible), not on the programmer’s opinions.  It is not easy, in many cases not possible, for the programmer’s opinions to be inserted.  As for rule-based AI, it has clear goals and is tested, and a programmer trying to skew it would be noticed.

        Having said that… I am confidentially aware of a situation where a commercial chatbot has “political correctness filters” on its output, to keep it from saying certain unpopular things, and as a result was not able to truthfully report some things said in old religious books that it was being used to summarize.

        Chatbots are not repositories of knowledge in the first place.  Nobody should treat them as such.  They are repositories of language.

    • #2669182

      “Does AI learn from programmers or from users?” “Does AI attract a certain type of user who accepts what is being said by AI, while what is being said deters others from using it?”

      “Personal biases exist everywhere (including me).” “Not on you list, I recall telling a teacher that science disproves Influenza vaccination causes Autism and that the vaccine did not give people Influenza.” “He continued giving students anti-vaccination messages.”

      “The problem is not AI programmers.” “The problem is how to improve AI while retaining reasonable freedom of speech and rewarding creative thinking (as opposed to supporting groupthink).”

      Exactly, “personal biases exist everywhere” and in each and every one of us — i.e., we’re human!!

      “The problem is how to improve AI while retaining reasonable freedom of speech”.

      No, the problem is how to retain freedom of speech.  Leaving “reasonable” in the sentence is the problem!

      Your “reasonable” may be my “reasonable”, or it may not be.

      Group together thousands or millions of people and have them apply “reasonable” to whatever.  One person’s reasonable is another’s unreasonable.  I am not you and you are not me.  I would like to think that we could agree on such precepts as the Ten Commandments or, at least, the Golden Rule, which nicely sums up the Ten Commandments.  From there, “reasonable” is wide open to thousands, if not millions, of determinations.

      And CraigSH’s list above is a good start, but of course it’s but a minuscule example.

      At the very heart of our human experience, is that we “learn” how to see the world.  That learning involves adopting biases and prejudices.  We can be easily fooled.  Optical illusions are prime examples.  We see what we are biased to see.

      To pretend that biases don’t exist, or that somehow the bad biases must be eliminated and the good ones kept (there’s that concept of “reasonable” again), is a fool’s errand and, more importantly, it is the stifling of free speech.  That is: insisting that what you say is right and what I say is wrong, with one side having to win, is where the problems ensue.

      “I recall telling a teacher that science disproves Influenza vaccination causes Autism and that the vaccine did not give people Influenza.” “He continued giving students anti-vaccination messages.”

      This is a telling example.  Because if you line up any 100 (or more) medically or scientifically trained professionals, differing opinions will be present about most any of their particular specialties, even about vaccinations.  To think otherwise is belief, not science.

      You say vaccination causes autism and I say it does not.  My medical professionals’ opinions trump yours, or do yours trump mine?  To decide, do we take a vote?  Nope; in a free-speech world, we leave the question open to debate and scientific analysis.  This makes medicine more the art that it is, versus the science that we would like to believe it is.  Thalidomide, for example, was marketed to prevent pregnancy nausea, until it wasn’t.  Science, because of too many horrific birth defects, finally made that determination.

      What if it had been decided that the results related to Thalidomide shouldn’t be studied because doing so didn’t comport with someone’s, or some organization’s, “reasonableness”?  Yeah, that would have been foolish.  But foolish act after foolish act of suppression and conformance can be documented throughout history.  That’s why freedom of speech/freedom of information should be sacrosanct and why it is first (1st) in the U.S. Bill of Rights.

      Free speech is and was considered such an imperative because the founders fled from its lack.  As an aside, freedoms, and freedom of speech in particular, oftentimes conflict with safety and security.  Mentally sound, responsible adults should have the freedom to be stupid.  But of course, your stupid may not be my stupid.  Such are the idiosyncrasies of freedom.

      Science is a continuum, not an end.  The Pandemic saw the word science thrown about far too much – i.e., my science trumps your science.  Quite frankly, actual science knows very little about the world, as masking, 6-feet and the efficacy of the vaccines have exemplified.

      We believe we know more than we do. Because the human brain is wired to believe – i.e., concrete is better than the abstract and belief is better than not knowing.  And what science knows today, will likely change in the future — albeit, months, decades or centuries.

      Mis-, dis- and mal-information, along with the term “hate,” have been used to censor information, wrongly and radically, in many ways.

      Censorship under the guise of safety, protection, virtue signaling etc., can be easily used to stifle speech; eliminating opposing views and inhibiting debate and mandating conclusions that may not otherwise have been made, if the quashed information was allowed the light of day.

      That’s what science is: the analysis of hypotheses based on (hopefully) objective data.  Without ALL of the data, good, bad or otherwise, there can be no science, there can be no logical deductive reasoning, there can be no learning, just rote BS emanating from someone’s so-called sensibilities (or dare I say it), “reasonableness.”

      The bottom line is, we’re going to be emotionally, monetarily and psychologically harmed if we’re not constantly asking ourselves what the motives are of everyone we come into contact with, and what it is they mean or want by their speech or actions.

      The same is true for the information we receive via the Internet or by any other means.  The receiver bears the responsibility, not the informational conveyor — let the informational receiver beware, not the informational conveyor, and let ALL information flow.

      The problem begins if AI draws conclusions among, for example, the divergent positions on CraigSH’s list.  Information isn’t right/wrong, good/bad; it is information.  All information is necessary.  I would hope that AI would provide just that, ALL information, and let the recipient make their own determinations.  Because at the heart of the human experience, that’s exactly what we must do, easy or difficult, like it or not.  No one has all the answers, or even many of the answers, and if they did, the answers might change, because they’re humans, not a deity.
      3 users thanked author for this post.
    • #2669624

      Thank you all for your support!

    • #2669619

      Dear Michael.  Thank you.  This is an exceptionally valuable and intelligent article.

      1 user thanked author for this post.
    • #2669919

      Let me add my name to the people who found this to be an extremely important & incisive article.  Wonderful!

      1 user thanked author for this post.
    • #2669962

      “That reason is simple: if it’s wrong to do something without a computer, it’s still wrong to do it with a computer.

      See how much puzzlement that principle clears away.”

      Yep.

      Human, who sports only naturally-occurring DNA ~ oneironaut ~ broadcaster
