• Patch Lady – 31 days of Paranoia – Day 26


    Author
    Topic
    #227514

    Our next topic of paranoia is one where there is more paranoia than reality: being concerned about automobiles being hacked. Sure there are
    [See the full post at: Patch Lady – 31 days of Paranoia – Day 26]

    Susan Bradley Patch Lady/Prudent patcher

    6 users thanked author for this post.
    Author
    Replies
    • #227520

      I can appreciate any new technology on the road or at a desk. But for now I’m in the driver’s seat behind the wheel and at the keyboard. That’s my personal preference. Either way, I’m living just a notch above a horse, saddle and compass. The comfort comes when I’m at the controls, yet I realize one day I’ll have to trade that horse and buggy in for a Rastro and a Jetson Starship E model. It’s worth it for the time being to put paranoia in the back seat while I can be sure footed and breathe easy. Taking one mode of transportation at a time here. Win7 is running great thanks to Woody and crew, the Jeep starts when I turn the key and manually shift to first gear. Rough terrain ahead but 4WD takes care of that.

       

      Win7, GroupA/B, Mac OSX, Jeep circa ’90s, everything runs with one human at the controls, a little help from my friends here at Woody’s place and at the local car shop.

      MacOS iPadOS and sometimes SOS

      5 users thanked author for this post.
    • #227521

      This links to an FBI Public Service Announcement on automotive cybersecurity issues that refers to July 2015 concerns:

       
      FBI warns on risks of car hacking
      March 18, 2016

       
      The FBI and the US National Highway Traffic Safety Administration have added their voices to growing concerns about the risk of cars being hacked.

      In an advisory note it warns the public to be aware of “cybersecurity threats” related to connected vehicles.

      The public service announcement laid out the issues and dangers of car hacking.

       
      Read the full article here

      6 users thanked author for this post.
      • #227684

        So, how many cars in the US have actually been hijacked while in use on roads and highways?

        Lately, the FBI has issued a lot of warnings about proof-of-concept attack modes which have not yet resulted (and may never result) in even one real-world attack. Some of the older FBI warnings in this category go back a few years. So where are the in-the-field attacks?

        -- rc primak

    • #227522

      I am reminded of these lines of dialog from Walter Bernstein’s screenplay for the 1964 film “Fail-Safe” (faithfully adapted from the 1962 novel of the same name by Eugene Burdick and Harvey Wheeler):

      KNAPP: “The more complex an electronic system gets, the more accident-prone it is. Sooner or later, it breaks down… A transistor blows, a condenser burns out. Sometimes they just get tired, like people…”

      GROETESCHELE: “But Mr. Knapp overlooks one thing. The machines are supervised by humans. Even if the machine fails, the human being can always correct the mistake.”

      KNAPP: “I wish you were right. The fact is the machines work so fast; they are so intricate; the mistakes they make are so subtle that very often a human being can’t know if a machine is lying or telling the truth.”

      And that was more than a half-century ago… .

      7 users thanked author for this post.
    • #227529

      We have already had hacks on some cars, and carjackers have certainly grown more technical in recent years. We don’t have many self-driving cars on the road, and those that are out there are not providing enough data to tell us what happens when tens of thousands of them are on the road. Nor do we know how attractive all those types of vehicles will be to hackers, or to our enemies. Oversight seems way behind the advancements, and just like the FAA and drones, we may be moving too fast in pushing this stuff onto the public without knowing how a large population of them will affect us. I also think of Spock logic vs. human logic interacting with self-driving vehicles. Humans sometimes act illogically, while computers act only as they are logically programmed to. This could be a significant roadblock to introducing self-driving vehicles to the road in large numbers. Coexisting with illogical humans may be the biggest hurdle.

      5 users thanked author for this post.
      • #227685

        Can you reference even one attack which was not part of a “proof of concept” or a carefully staged tech publication or news channel stunt?

        -- rc primak

    • #227533

      In a world of driverless cars so dependent on software, it only requires one well-timed (say rush hour) release of a hack to cause hundreds, thousands, or millions of fatal accidents all at once.

      They’ll be prying the steering wheel out of my cold, dead hands.

      Fortran, C++, R, Python, Java, Matlab, HTML, CSS, etc.... coding is fun!
      A weatherman that can code

      6 users thanked author for this post.
      • #227686

        Again, these proof-of-concept stunts have been around for nearly a decade now. Where are the real-world attacks? Doesn’t the lack of real-world attacks so far tell us something about the feasibility of an actual in-the-wild attack in the future?

        Sometimes the fear of an attack is more effective than the actual attack. So we should look at who’s spreading this fear now and for the past decade or more.

        -- rc primak

    • #227536

      My advice:

      Think twice before loving the idea of handing someone else control of your life. It’s a pretty safe bet that they only really care about their own.

      Giving something else control?

      ToErr

      -Noel

      7 users thanked author for this post.
    • #227542

      Slight edit needed in the article, hook > hood?

      1 user thanked author for this post.
    • #227539

      We are getting to the point where the internet is becoming an essential service. It is one of many utilities that businesses, governments and consumers consider a necessity. Internet connectivity is in all new private and commercial vehicles, to provide some sort of IoT convenience. We are being told that AI is the future for all vehicles, so like it or not, it’s coming. Hackers are attracted to anything that society considers essential. There is no better place to sow chaos or extort a ransom.

      Serious hacking is mostly done by state actors. They are more likely to hack the utility itself. Organised crime is the next on the list when it comes to hacking on a large scale. They are more likely to hit businesses or a city government. I would hazard a guess that they need only threaten to hack in order to get what they want. Eventually, but sooner rather than later, school buses, city buses, commercial vehicles and even big rigs will have some sophisticated AI presence. No human at the wheel.

      The obvious advice I’d give to vendors is to stop installing IoT/AI in any and all vehicles until industry-wide security standards are agreed to and made mandatory. Standards driven by a private company or a consortium of like-minded companies are not acceptable: such companies can evade security measures that their own products cannot meet, and they will put profits before security.

      2 users thanked author for this post.
      • #227563

        Serious hacking is mostly done by state actors. … Organised crime is the next on the list when it comes to hacking on a large scale.

        Of course, the only real difference between a state and organized crime is that the state has declared itself legitimate, and most try to at least appear so.

        I am convinced that hacking vehicles in operation is already happening on occasion. If you have the resources, an “accident” would be a clean way to get rid of someone who threatens your organization.

        4 users thanked author for this post.
        • #227603

          Roads, highways, cities and towns were not designed for driverless vehicles. It is not just a matter of planning a new infrastructure to accommodate the changes, if that is happening at all, but also a matter of training the human to adjust to a new role: a safety switch. If things get out of hand, the human is supposed to spring into action and take control of the vehicle. Unfortunately, a hacked vehicle in motion could be totally devoid of manual control, e.g. brakes, steering, ignition.

          A driver-less big rig seems out of the question, but I have seen video of it. Hacking a big rig would be like weaponizing it. These big rigs will have a human in the cab, but I question how alert that person will be having nothing to do for hours on end. Highway scenery is not stimulating.

          This AI test gets discussed a lot …
          -Given the binary choice of hitting one person or a group of people (and not being able to swerve or stop on time), what will the AI’s programming tell it to do?

          3 users thanked author for this post.
          • #227614

            There is always the assumption that, when things suddenly get too dangerous and only quick maneuvering could keep them from becoming outright accidents, there will be a driver sitting at the car’s controls who will immediately spring into efficacious action the moment he or she sees the danger or hears a warning tone, and who will do the right maneuvering, starting from a totally relaxed and distracted condition after a long time on the road without any need to take the slightest action.

            How likely is that? You just try it (on a stationary car or on a very slowly moving one, somewhere where there is no traffic around…)

            Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

            MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
            Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
            macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

            3 users thanked author for this post.
            • #227661

              Therefore the proponents of driver-less vehicles have made a grossly incorrect assumption.

              As far as that question on AI programming, I think the insurance companies would insist the vehicle hit one person rather than several, even if that one person is a child or a pregnant woman (cheaper payout – it is all about the money).

              1 user thanked author for this post.
    • #227551

      FOB relay hack attack

      https://m.youtube.com/watch?v=D_3lgxMwrWI

      Edit to remove HTML. Please use the “Text” tab in the entry box when you copy/paste.

      1 user thanked author for this post.
    • #227557

      This is the kind of stuff that really frightens me, and I can’t really understand why we’d want to make computers do everything for us. What happens when the tech fails? Will we even know what to do anymore? I don’t think tech should replace humans; it should assist us. Eventually being forced to hand full control of my vehicle over to some computer is unacceptable to me. A road full of AI-controlled cars is a disaster waiting to happen. I think there will be way more driving fatalities than there are now, especially in accidents involving large numbers of cars.

      Hacking is just one way this would happen and, IMO, it makes the people in this country less secure than they’ve ever been, because then you don’t need guns or missiles. All you need is a computer and a bit of hacker know-how to kill hundreds of people. Thousands, millions? Just hack in and tell the cars to floor it and then steer hard to one side at 70 mph or something. All of a sudden, a regular guy with a computer can be a murderer without even leaving the house or getting dressed. I’d say that puts people at FAR more risk than ever. Not only that, but it makes humans dumber via laziness, because we can’t be bothered to drive our own cars anymore.

      Having a small number of those cars on the road is okay. I actually think driverless cars would be great for people who are either disabled or too old to drive safely anymore, as they allow those people to maintain a degree of independence in their lives without feeling guilty about burdening others. For the average Joe who is perfectly capable of driving himself? No, I can’t go along with that.

      If people are really concerned about safety with human drivers, then perhaps we should look at our own driver’s education system. It’s a joke. Drive around a parking lot the size of a mini golf course through some cones and answer a handful of multiple choice questions and you get a permit? Then, drive around with an instructor a few times and tolerate him/her pushing the passenger side brake pedal while you’re driving, play along and watch some educational videos and you get your license? Seriously? We wonder why we have so many bad drivers on the road? We taught them nothing and just threw them out there to learn the rest on their own.

      I think driving school should last a year and drivers should gain experience driving in all weather conditions that they will encounter where they live. Rain, snow, ice, fog and all that as well as learning counter steering skills and evasive maneuvers to avoid accidents as well as just basic skills. They should be taught how to change a tire, check their oil and do basic maintenance. Driving school in this country is a joke, so let’s try taking that more seriously before we declare that humans can’t do better and turn even more of our lives over to tech.

      I honestly don’t trust the tech world enough to give it that much control. I hope it never happens because someone will abuse it and you’ll have enough people who will float that “well it has to be this way, so no use fighting it” line and enough people will buy it. I just don’t think we’re ready for this. I would much rather humans get better at things with the assistance of technology instead of throwing our arms up and saying “I give up!” and turning it all over to AI. That’s what driverless cars are to me.

      5 users thanked author for this post.
    • #227560

      If you like self-driving cars you’ll love self-piloting aircraft… especially Airbus pax liners where the computers “protect the aircraft” from the pilot’s “mistakes”.

      Here’s just one example where a pilot found himself in a “knife fight with the computer”. He finally won.

      When ‘psycho’ automation left this pilot powerless

      https://www.youtube.com/watch?v=2cSh_Wo_mcY

      2 users thanked author for this post.
      • #227564

        Having worked on “mission critical” systems in my career, I know that there are ALWAYS limits to what an engineer can anticipate when designing a system. The more complex the system, the more opportunity for blind spots. It’s a fact of life.

        At every level one must try to keep in mind, “fail safe”. I’m here to tell you, that’s impossible. We engineers are not gods.

        Without putting too much thought into it you might imagine that “keeping an airplane in its flight envelope by pulling the nose down” or “pulling over to the side of the road” might be a sensible design. And mostly it is.

        In HiFlyer’s linked video, a sensor (or group of sensors) failed. Clearly in that case the computer system’s designers didn’t anticipate that such a combination of failures could occur. There was no code in place that said, “Just milliseconds ago the plane was flying nominally and level, and now the nose is suddenly up 50 degrees, so disengage and tell the pilots they need to take over.” Why not? Because it’s virtually IMPOSSIBLE to anticipate every possibility, especially self-failure of the system itself. It’s a bit like suddenly developing dementia while in a position to control people’s lives. Ever meet a demented person who knew, or was willing to admit, that they had lost their faculties? All the ones I know (some in the family) think there’s nothing wrong with them.

        But let’s say your self-driving car actually DOES have code that says, essentially, “Whoa, suddenly I’m not getting enough input from the real world, so I need to stop!” Would you think it prudent to just stop suddenly in the middle of a freeway? Or pull over to the side of a road with cliffs on both sides? How well tested do you think such code is going to be?
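        The hypothetical “not enough input from the real world” check above can be sketched as a plausibility filter: if a sensor reading jumps by more than is physically possible between two control ticks, the automation should treat itself as unreliable and hand control back rather than act on the bad data. This is a minimal illustration only, not any real avionics or automotive logic; the 10-degrees-per-second rate limit is an assumed figure for the sake of the example.

```python
def plausible(prev_deg: float, new_deg: float, dt_s: float,
              max_rate_deg_s: float = 10.0) -> bool:
    """True if the change between two pitch readings is physically possible.

    max_rate_deg_s (10 deg/s) is an assumed airframe limit, for illustration only.
    """
    return abs(new_deg - prev_deg) <= max_rate_deg_s * dt_s

# Level one tick ago, "nose up 50 degrees" five milliseconds later: the sane
# conclusion is that the sensor, not the plane, has failed, so the automation
# should disengage and alert the humans instead of "correcting" the attitude.
sensor_ok = plausible(prev_deg=0.0, new_deg=50.0, dt_s=0.005)  # False
```

        Of course, as the posts above note, deciding what to do after the check fires (stop? pull over? just alert?) is the genuinely hard, poorly testable part.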

        Take it from a career software / quality engineer: We are not NEARLY to where we ought to be considering putting our very lives in the hands of technology. Note that I did not say “Technology is not NEARLY…” – I said “We”. Digital systems can actually achieve perfection, and make good decisions based on good data far faster and more consistently than people. The problem lies with the fact that we humans are building this stuff, and we are tremendously imperfect.

        We might be influenced by someone saying things like “Get this done by the end of the week or you’re fired” or thoughts like “I’m starting that new job on Monday, so **** this” or even “I don’t feel good today.” In my experience EVERYONE is influenced to do less than perfect work virtually ALL THE TIME by SOMETHING or SOMEONE.

        -Noel

        9 users thanked author for this post.
        • #227592

          Without putting too much thought into it you might imagine that “keeping an airplane in its flight envelope by pulling the nose down” or “pulling over to the side of the road” might be a sensible design. And mostly it is.

          Aircraft analogies are fitting when it comes to autonomous cars, since aviation is currently the home of the most sophisticated vehicular AI in actual service use.

          I’m not a pilot, but I am an aviation enthusiast, and I’ve watched nearly every aviation video I can find, including the full “Mayday/Air Crash Investigation” series.  There have been more than a few times that an onboard computer tried to outthink the pilot with disastrous effect.  Air Inter Flight 148 on January 20, 1992 springs to mind.  The Airbus A320 hit a mountain on approach to Strasbourg, killing most on board.

          The pilot had inadvertently selected a descent rate of 3300 ft/min when he intended to select 3.3 degrees, since he’d forgotten to set the mode first, and in both cases the display on the autopilot appeared to read “33” at a glance (the decimal point and the unit description are easy to miss). That resulted in an exceptionally rapid descent.  Even so, simulations showed the aircraft missing the mountain even at 3300 ft/min, which would have put the runway in sight of the aircraft.  It turns out that a tiny bit of turbulence caused the aircraft to bounce up a little in altitude right at the moment the 3300 ft/min descent was selected.  The computer read this as a sign that an emergency was in progress, and it took it upon itself to descend at an even faster rate than the pilot had commanded.  Had the plane done what it was told, it would almost certainly have missed the mountain, even with the pilot’s mistake.  And even if the pilot had been aware of this obscure “safety” feature, he would probably not have known it had been triggered by turbulence like he had felt a thousand times before.
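          The “33” ambiguity is easy to quantify. In flight-path-angle mode, 3.3 commands a descent of under 1,000 ft/min at typical approach groundspeeds, while in vertical-speed mode the same digits command 3,300 ft/min, roughly three and a half times steeper. A back-of-the-envelope sketch (the 160 kt groundspeed is an illustrative assumption, not a figure from the accident report):

```python
import math

def vs_from_angle_fpm(angle_deg: float, groundspeed_kt: float) -> float:
    """Vertical speed (ft/min) implied by a flight-path angle at a groundspeed."""
    groundspeed_fpm = groundspeed_kt * 6076.12 / 60.0  # knots -> feet per minute
    return groundspeed_fpm * math.tan(math.radians(angle_deg))

# The same "33" on the display, read under two different autopilot modes
# (160 kt groundspeed is assumed for illustration):
fpa_mode = vs_from_angle_fpm(3.3, 160.0)  # roughly 930 ft/min
vs_mode = 3300.0                          # 3300 ft/min, about 3.5x steeper
```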

          There are two different philosophies in effect at each of the world’s two biggest airliner manufacturers.  Boeing believes that automation can assist the pilot, but the pilot is the boss.  Airbus believes that since most air crashes can be traced back to pilot error, the aircraft should override the pilot when it thinks he’s making an error.  You can probably guess which philosophy I agree with and which I find absolutely terrifying.  The computer cannot think or reason; it can only react in deterministic ways according to what it thinks it knows via its sensors.  It doesn’t have the intellect to recognize when it is obviously wrong if it hasn’t been programmed for that exact contingency.

          I find the push toward driverless cars to be disturbing. It’s an easier sell in some ways than even an Airbus-level of control, given that airline pilots are nearly always consummate professionals while drivers are… well, not, but the complexity of software required to drive a car without having an airliner-style “Oh no, I don’t know how to handle this! Take over, human!” function makes the odds of dangerous bugs far too likely for my comfort.

          My car doesn’t get updates. It’s not all mechanical (carb/points/condenser), but changing the program in the ECU (engine control unit) means having to physically attach a ROM chip with the new program in it to the ECU itself.  It has no flash capability, and has been running the same old program since the Bush administration (and I don’t mean Dubya).

          Even if it is changed with a new chip, all it can change is the idle, the spark advance curve, and the fuel injector duty cycle.  It doesn’t control anything else, including the throttle position.  I was disturbed by the rise of throttle-by-wire systems, particularly when mated with automatic transmissions.  If there is a bug in the system somewhere and the thing decides to “floor it” of its own volition, it can do that!  And with “modern” push-button start cars, you can’t easily turn the car off in motion; it won’t allow it at speed, because that would be unsafe (you’d lose power steering and such).  Most of the time that reasoning holds, but if the car has decided that it saw the checkered flag and it’s wide open, I’m turning it off.

          During that spate of unintended acceleration episodes with Toyota Priuses, one of the reports was that a panicked driver or passenger in a Prius that was running away had called 911 while it was happening, but could not turn off the car as described above.  I would imagine it would have a long-hold force-off mode like a PC, but I’ve never actually had a car like that, so I don’t really know.  It’s apparently not something the driver thought to do in the panicked state, and the car went on to crash.

          Whatever the cause of those incidents, whether it was truly the floormats or the throttle pedal at fault or something else, the idea of a car that I can’t definitively force off in a split second if I want to would unnerve me.  I prefer to also have a positive, always-works way to disconnect engine from transmission at the press of a pedal in addition to that.  I don’t inherently trust machines very much– I have too much experience with them for that.

          Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
          XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
          Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

          7 users thanked author for this post.
          • #227636

            My 2001 Honda developed a scary problem last year: the cruise control resume/accelerate button got stuck in the on position, but there was no indication other than unstoppable acceleration. Pulling up on that button did nothing; it was already up. Hitting the brakes did nothing; it resumed and accelerated. Pulling up on the gas pedal was also pointless. Topping that off, the cruise control indicator light had burned out. The only solution was to repeatedly hit the cruise control cancel button until hearing a quiet thunk, which released the resume/accelerate.

            I’m lucky I lived through the first time, yet stupidly tried it one more time on a long boring drive and didn’t panic quite as much. I don’t use it anymore, not sure if the cancel button will work next time.

            Even cruise control is AI of sorts, and I no longer trust that limited function 100%, whether in my heavily used car developing a mechanical malfunction or in a brand new model. Having no control over a floored gas pedal scared [] me. However, I do like anti-lock brakes; I could have used those on motorcycles more than once.

            3 users thanked author for this post.
            • #227743

              I’ve thought about the cruise control being a potential threat before, though yours is the first concrete example I’ve heard of such a thing happening.

              My car does have cruise control, and I use it on all longer trips.  I’m not afraid of it, though, for a specific reason (other than having a clutch pedal that will positively stop acceleration if I say so), and that’s that it is a vacuum-operated type, and there’s a mechanical vacuum switch that dumps the vacuum if the brake or clutch pedal (not sure which one… I know in the automatics it is on the brake, ’cause there’s nowhere else, but I think it is on the clutch on manuals) is not in the fully-up position.  The little computer in the cruise control can tell it to speed up all it wants, but it can’t do anything without vacuum.

              In order for my cruise control to put me in danger, there would have to be two failures at the same moment.  The cruise control would have to refuse to disengage AND the vacuum dump switch would have to hold vacuum uncommanded for it to be a risk.  And, of course, I have a clutch pedal and an older style key switch that will cut power to all engine systems, which would be a last resort.

              Also, the design of the cruise buttons does not lend itself to the kind of issue you had.  The ON button is only an ON button.  If it gets stuck, then I suppose I can’t turn it off, but I can still hit the cancel button (or the brake pedal; the brake light switch also sends the cancel signal) to cause cruise to be inactive even though it is on. If the Accel button got stuck, I could just press the OFF button.

              An on button that is the same as Accel would be potentially dangerous, and it sounds like that is what you had.  Glad you were unharmed… but that’s why I don’t trust machines that much.
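              The cancel-priority design described above can be sketched as a tiny state machine: the cancel/brake and OFF signals are evaluated before anything else, so even a stuck accelerate input cannot keep the system engaged. This is purely illustrative logic, assumed for the sake of the example, not any manufacturer’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CruiseState:
    engaged: bool = False
    target_mph: float = 0.0

def tick(state: CruiseState, on: bool, off: bool,
         accel: bool, cancel_or_brake: bool) -> CruiseState:
    """One control-loop tick. OFF and cancel/brake are checked before any
    other input, so a stuck accelerate button cannot keep the system
    engaged once the driver brakes or hits cancel."""
    if off or cancel_or_brake:
        state.engaged = False
        return state
    if on:
        state.engaged = True
    if state.engaged and accel:
        state.target_mph += 1.0  # a stuck accel raises the target each tick
    return state
```

              With this priority ordering, a stuck accelerate button raises the set speed only until the driver touches the brake or cancel, at which point the system disengages regardless of what the stuck button is reporting. Combining ON and accelerate on one physical button defeats exactly this ordering.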

              Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
              XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
              Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

          • #227945

            “I don’t inherently trust machines very much– I have too much experience with them for that.”

            Me 2.


            @Ascaris, you may have an ECU that isn’t flashable, but modern ECUs are; it’s how we (collectively) remap the control unit. Remapping offers only very small benefits on cars that are not equipped with a turbocharger, though. Hardware changes are the best route for them.

            I’m with you, I don’t care to be a hostage (or captive audience) in my vehicle either. If you looked into the Arizona incident (video is out there) it’s almost impossible for even human reflexes to have prevented it. I’m not against self driven vehicles but I am leery of them for now.

        • #227594

          Manual override. Fail-safe. These are activated under certain conditions.

          On permanent hiatus {with backup and coffee}
          offline▸ Win10Pro 2004.19041.572 x64 i3-3220 RAM8GB HDD Firefox83.0b3 WindowsDefender
          offline▸ Acer TravelMate P215-52 RAM8GB Win11Pro 22H2.22621.1265 x64 i5-10210U SSD Firefox106.0 MicrosoftDefender
          online▸ Win11Pro 22H2.22621.1992 x64 i5-9400 RAM16GB HDD Firefox116.0b3 MicrosoftDefender
          • #227689

            And they can fail under a few circumstances. The key element here is what is the risk of this happening? Very, very low, it turns out.

            -- rc primak

            • #227722

              rc, I enjoy your writing.

              But any nonzero value is reason enough to leave this on the drawing board and out of production. I would never want to be the engineer that can’t sleep because an innocent death directly resulted from my work. No failure rate above zero is excusable.

            • #227876

              Sorry. Zero risk never exists in the real world. We can reduce risks to near-vanishing, but zero risk is as ridiculous as when one Senator once said about Acid Rain:
              “Why can’t we reduce the pH to zero?”

              You need to understand the science and engineering involved to see the humor in that, and in the impossible demand that AI be 100% infallible before we can “risk” releasing it into the wild.

              This is also the flaw in the arguments by some folks that GMO crops are “too risky” to be released into the field or consumed by people. I haven’t heard any stories of people being turned into monsters by GMO corn.

              -- rc primak

            • #227950

              Let’s stick to the subject of AI piloting missiles on public roadways, rather than follow your non sequitur misdirections.

              Accidents will happen and cannot be eliminated by human reflexes or by AI. But a human can be held responsible for the consequences of his own actions or inaction. AI makers wish to foist that responsibility onto a human who was not even in control at the already-passed moment when the impending collision could have been avoided. That is scapegoating at the last moment.

              That is a failure that can be avoided by keeping control under the properly liable entity on board, that is a living breathing human. No AI can cause an accident if no AI is given control. That is a zero result, and very easy to attain.

              1 user thanked author for this post.
      • #227619

        AI also crashed an Airbus military transport for Spain during a test flight. At takeoff the computer decided the engines were over-revving and throttled them back to idle. The crash killed everyone on board, including the Airbus tech people.

        2 users thanked author for this post.
    • #227559

      Hey, I’d pay good money for a gadget that would hack and temporarily disable/limp a selected nearby car. It would be great for discouraging illegal left-lane driving and other obnoxious and inconsiderate behaviors.

      😉

      1 user thanked author for this post.
      • #227690

        In my observations, the rude or crazy behaviors almost always happen in ways where the last thing in the world I would want to do to the offending vehicle is to slow or stop it. Far more dangerous than just letting the idiot pass.

        -- rc primak

        1 user thanked author for this post.
    • #227600

      First, let me spell this out up front: I think that both self-driving cars and the Internet of Things are very bad ideas being pushed primarily by people with $$$ signs in their eyes (“imagine that potential huge market out there, and us being the first to go into it and start growing our business from the get-go!!!”).

      Or, to be more precise, they are really bad for the world of today, with today’s technology. Maybe, several decades from now, things will be different, but where we are today is, in my opinion, not too different from where railways were back in the 19th century. A great idea, until the trains started killing people standing on the tracks and looking the wrong way (as actually happened in the UK during the inaugural run of one of the earliest steam locomotives, the “Rocket”), until derailments claimed scores of deaths, and until railway bridges collapsed because no one yet had the expertise to design and build them sturdy enough for their purpose. And so, catastrophe after catastrophe, railway technology evolved to the safety of today: much, much better, even if still improving through occasional hard knocks.

       

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      2 users thanked author for this post.
      • #227691

        The AI being developed today won’t be widespread for nearly a decade, maybe more. People keep their cars, even on leases, for about three to five years, often much longer. So the transition from human-driven cars to AI driven cars will take place in tandem with roadway and infrastructure upgrades, which will make the new technologies much more acceptable to most people. And given that fleet cars and public transit will be the first to receive AI upgrades, probably most of us will never encounter our own self-driving cars before 2025 at the earliest. By then we will have taken hundreds of rides in self-driven vehicles, sometimes without even knowing it.

        -- rc primak

        • #227756

          rc primak: It would be great if a coherent and gradual introduction of some measure of automated driving actually occurred over the next six or seven years, as you expect. But looking at the world through my own eyes and those of the several newspaper and journal writers I read, I really doubt it will go that way.

          I do not doubt that the huge amounts of money, and the work that money pays for, being thrown at the “driverless car” and the “Internet of Things” have produced and will continue to produce important spinoffs in computing, robotics, remote sensing and communications. What I very much doubt is that, given another few years, the coming progress will get us to the point you expect.

          I have been in the business, mainly but not exclusively, of developing and testing means of positioning fixed or moving objects at various levels of precision using GPS, mostly to within between one foot and a couple of inches, depending on the application. As that is an important component of the technology needed to guide autonomous and semi-autonomous vehicles on water, in the air, in outer space and, yes, on streets and roads, I have been hearing about “intelligent” vehicles for over a quarter of a century, and discussing them, on and off, with other specialists for about as long. So I have not come to the opinions I have presented here in a casual way.

          A couple of years ago I read an article in “Scientific American” discussing the levels of “driverless-ness” of cars, big trucks and other vehicles and heavy moving equipment. I fully shared the authors’ conclusions concerning “driverless” cars. They split the topic between vehicles that would need a driver ready to take control in an emergency, when the automated system does not respond appropriately, and vehicles that would be fully automatic. For the latter, the likely use, according to the article, would for many years to come be limited to closely monitored and largely restricted trajectories: equipment moving parts from point A to point B inside warehouses, machines moving containers inside industrial harbors, heavy agricultural machinery working large fields under mostly autonomous control (already happening), or driverless cars running in dedicated lanes where other traffic is not merely prohibited but physically excluded. Or trucks, buses and other heavy vehicles being monitored remotely when on common roads, much as other kinds of drones, military ones for example, are being monitored these days.

          As to largely self-driven vehicles with a backup human driver able to take over, overruling the automated drive at a moment’s notice, the really important questions are: Is the driver awake and alert all the time during a long night drive? Is the driver drunk, or high on drugs? Is the driver watching videos, playing computer games or texting on a cellphone at the time? Or in a tender embrace with someone in the passenger seat? Even if the answer to all of those is “No,” one question remains: can anybody in normal physical shape go from the mental state reached after hours of relaxing and doing nothing of the kind to suddenly recognizing an imminent danger, moving fast and taking over from the automated drive to perform the quick maneuvers needed to avoid an accident with potentially serious or even fatal consequences?

          When those answers are in, the conclusion is likely to be a pessimistic one concerning the viability of self-driven cars out on roads and streets, or anywhere, particularly if infrastructure development remains at its present languid and underfunded level. And bringing self-drive capabilities to market, in cars first and big trucks second, has all along been the “driver” of this big push to deliver them within a few years. That has been so for a quarter century, and will probably remain so for quite a few more years yet, in my opinion.

          I have not gotten to the IoT yet, so let me say just this: other than one-way, over-the-Internet control of things such as turning lights on and off from one’s cellphone to make the house you are currently away from look occupied, where the gadgets in question do not send data back and are not connected to exchange information with anything else, I see little benefit and very big problems in remote control of gadgets over the Internet, let alone in gadgets that can access the Internet on their own.

           

          Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

          • #227879

            Can anybody in normal physical shape go from a mental state achieved after hours of relaxing and doing nothing of the kind, to suddenly becoming able to recognize an imminent danger, move fast and take over from the automated drive to perform the necessary fast maneuvers to avoid an accident of potentially serious or even fatal consequences?

            That requires special training. But police, firefighters and other First Responders, not to mention military personnel, do this daily, and usually admirably well. People being impaired or inattentive is already the cause of most of the more serious crashes on roadways.

            So I expect this will also be the case in the rare instance of a failure in a self-driving vehicle. As long as the tech performs extremely well where and when there are no human lives at risk, I don’t see much difference when self-driving vehicles enter the mainstream. As long as human override is available and easy to use, I see little to no additional risk. And there would be a sharp reduction in the existing human-error risks which cause most of today’s serious crashes.

            Risk will never be zero. But the risks of AI driven cars vs. human driven cars can be turned in favor of AI with human override.

            I have not gotten to the IoT yet, so let me say just this: other than one-way, over-the-Internet control of things such as turning lights on and off from one’s cellphone to make the house you are currently away from look occupied, where the gadgets in question do not send data back and are not connected to exchange information with anything else, I see little benefit and very big problems in remote control of gadgets over the Internet, let alone in gadgets that can access the Internet on their own.

            IoT is two-way when we are talking about self-driving cars. And a lot of processing is not over the Internet, but is Edge Computing. That is, to avoid latency issues, a lot of processing and responding happens onboard or at nodes on the edge of the network. It’s not centralized processing — it can’t be.

            Edge computing and IoT 2018 – when intelligence moves to the edge
            https://www.i-scoop.eu/internet-of-things-guide/edge-computing-iot/

            -- rc primak

    • #227617

      Two words: Overreaching nonsense.

      1 user thanked author for this post.
    • #227631

      Auto-updating via WiFi should be banned.

      Devices need a mechanical switch that must be depressed (at the correct time) to allow updates to be installed (e.g. BIOS, IoT, etc.).

      -lehnerus2000

      3 users thanked author for this post.
      • #227638

        Maybe cars will shut off and restart in the middle of your commute to install the latest and greatest patches. All for your safety, of course, because you can’t be trusted to do it on your own and also it simplifies their engineering, not having to deal with a variety of patch selections among pesky customers.

        [sarcasm OFF]

         

         

        3 users thanked author for this post.
      • #227692

        Better yet, a digital and mechanical key (signature and shape have to match), provided only to qualified repair-shop personnel. Although I would like these keys to be made available to independent auto mechanics, not just dealership shops. Not perfect protection, but better than OTA updating, even with digital signing.
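        Conceptually, the “digital and mechanical key” combination could look something like this toy sketch, where a firmware update is allowed only when a physical switch is engaged AND the image carries a valid signature. The key name and logic here are purely hypothetical, not any vendor’s actual mechanism:

        ```python
        # Toy illustration of a two-part update gate: a mechanical switch
        # plus an HMAC signature check. Hypothetical, for discussion only.
        import hmac, hashlib

        SHOP_KEY = b"issued-to-qualified-repair-shops"  # assumed shared secret

        def sign(image: bytes) -> bytes:
            """Compute the HMAC-SHA256 signature of a firmware image."""
            return hmac.new(SHOP_KEY, image, hashlib.sha256).digest()

        def allow_update(physical_switch_on: bool, image: bytes, sig: bytes) -> bool:
            """Permit a flash only if both halves of the 'key' match."""
            if not physical_switch_on:                     # mechanical half
                return False
            return hmac.compare_digest(sign(image), sig)   # digital half

        fw = b"new engine-control firmware"
        good = sign(fw)
        print(allow_update(True, fw, good))        # True
        print(allow_update(False, fw, good))       # False: switch not engaged
        print(allow_update(True, fw, b"\x00" * 32))  # False: bad signature
        ```

        The point of the sketch is simply that neither half alone is enough: a stolen signing key is useless without physical access, and physical access is useless without the signature.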

        -- rc primak

        2 users thanked author for this post.
      • #227748

        PC motherboards used to have a jumper that had to be installed for a firmware flash to be possible.  I wish they still did… if OEMs are concerned that businesses won’t like the extra work of updating firmware, just leave the jumpers in the default “on” position, but still accessible when the case is open.  Those of us who want the extra protection can remove the jumper (on a desktop, I would just move it over so it sits on only one pin; on a laptop, I would be afraid it would be jarred loose, so I would remove it completely).

        Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
        XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
        Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

        2 users thanked author for this post.
        • #227880

          In some Chromebooks, there’s an internal switch that requires partial removal of a case element. Then and only then can the firmware be put into write-enabled mode. Toggling the switch back and replacing the case element returns the device to write-protected mode. It’s a pain to do physically, but it shows that such a toggle is possible even in laptop form-factor hardware.

          The awkwardness is irrelevant in cars, where the access door can be as large as we wish. Just as long as it’s a physical toggle or physical key interface, not something that can be flipped over the air.

          -- rc primak

    • #227680

      Re: @anon #227603

      “These big rigs will have a human in the cab, but I question how alert that person will be having nothing to do for hours on end.”

      Northwest Airlines overflew its destination by 150 miles and did not answer air traffic control for over an hour:

      https://www.youtube.com/watch?v=7_jBmo_TIgM

      Re: @Ascaris #227592

      “And with “modern” push-button start cars, you can’t easily turn it off in motion”

      Or in the garage apparently.

      https://www.techtimes.com/articles/227751/20180515/keyless-cars-are-killing-people-due-to-carbon-monoxide-poisoning.htm

      Re: @OscarCP #227614

      “…a driver sitting in front of the car controls that will immediately spring into efficacious action the moment that this occasional driver sees the danger, or hears a warning tone…”

      https://www.theguardian.com/technology/2018/mar/22/video-released-of-uber-self-driving-crash-that-killed-woman-in-arizona

      with a paid driver on board for the test.

      6 users thanked author for this post.
    • #227693

      A few random observations about AI, driverless vehicles, human overrides (failsafes) and IoT security, based on various posts throughout this thread:

      Prevalence of Driverless Technology Hacking and Who Is Hacking Vehicles:

      Serious hacking is mostly done by state actors.

      Don’t depend on it. The 9/11 attacks in the US were not the work of state actors.

      I am convinced that hacking vehicles in operation is already happening on occasion.

      Exactly what the FBI and other government agencies want us to believe. But where’s the proof?

      Human Judgment and Human Education:

      I don’t think Tech should replace humans, but assist us.

      Agreed.

      Having a small number of those cars on the road is okay.

      Actually the accidents so far indicate that a roadway with a mix of driverless and human-driven cars is more dangerous than a roadway full of only driverless cars. AI can’t yet predict and respond to human errors adequately.

      This AI test gets discussed a lot …
      -Given the binary choice of hitting one person or a group of people (and not being able to swerve or stop in time), what will the AI’s programming tell it to do?

      And what would a human do? Would you make the “correct” decision? Also, sheer numbers aren’t all AI can consider. Like the grim calculus of evaluating the worth of lives and injuries in plane crashes, one person’s worth may be many times the worth of another person or group of people. AI knows all these actuarial statistics. That’s the scary part to me.

      Truth be told, there have already been examples of AI making driverless cars go through intersections and accelerate to avoid hitting other traffic which has run a red light. The results were better than when humans are put into a similar simulated situation. We almost always apply the brakes — sometimes with disastrous results.

      I think driving school should last a year and drivers should gain experience driving in all weather conditions that they will encounter where they live.

      This is already provided for in some States. We in Massachusetts have Graduated Licenses and required supervised numbers of hours behind the wheel before a Full License is granted.

      (This is controversial from a privacy and legal due process point of view:) Some insurance companies also offer discounts to encourage parents to install “black box” monitors into cars which report on the behind the wheel behaviors of young drivers. There are other groups of drivers who also may be offered this option to avoid much higher insurance premiums.

      Right To Repair:

      (Various posts above.)

      Nothing in the current implementation of driverless cars prevents manual takeover or self-maintenance of things like tire pressure and oil changes.

      Failsafes:

      During that spate of unintended acceleration episodes with Toyota Priuses, one of the reports was that a panicked driver or passenger in a Prius that was running away had called 911 while it was happening, but could not turn off the car as described above.

      Those Prius models are supposed to have a “hold until disengage” function. It does not appear that in any of the crashes the drivers did the procedure correctly. That’s either poor documentation or drivers who never RTFM. Education failed, not the technology itself. That is, except where the accelerator pedals actually stuck physically. Which did happen a few times, hence the recalls. That was not a computer issue — it was a mechanical issue with conventional parts.

      the cruise control resume/accelerate button got stuck in the on position, but there was no indication other than unstoppable acceleration.

      That is a computer issue, and one more often found in third-party aftermarket cruise control units. Even factory-installed units can fail, with exactly that consequence. The problem there is indeed poor design from the point of view of positive manual override. This is not what seems to have happened in the Toyota issue.

      -- rc primak

      • #227750

        Those Prius models are supposed to have a “hold until disengage” function. It does not appear that in any of the crashes the drivers did the procedure correctly. That’s either poor documentation or drivers who never RTFM.

        That’s true, but under stress, people don’t tend to think clearly and remember something they’ve read in a manual.  They revert to habit, and the habit is to press and release to turn off the engine, and this time, it didn’t work.  The good old keyswitches are not prone to that… the same action the driver has done a thousand times to make the engine turn off is the correct one to force it off if it ever becomes necessary.  There’s no ifs, ands, or buts about cutting all the power to the ECU, the ignition, the fuel pump, and the fuel injectors– the engine will turn off.
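        For illustration only, the difference between a momentary press and a “hold until disengage” forced shutdown might be sketched like this (hypothetical logic, not any manufacturer’s actual firmware):

        ```python
        # Hypothetical push-button ignition logic: a short press is ignored
        # while the car is moving, but holding the button for ~3 seconds
        # forces an engine shutdown. Illustrative only.
        HOLD_TO_KILL_SECONDS = 3.0

        def ignition_request(hold_seconds: float, vehicle_moving: bool) -> str:
            """Return the action taken for one press of the start/stop button."""
            if not vehicle_moving:
                return "toggle engine"      # normal start/stop when parked
            if hold_seconds >= HOLD_TO_KILL_SECONDS:
                return "force shutdown"     # the 'hold until disengage' path
            return "ignored"                # brief presses do nothing at speed

        print(ignition_request(0.2, vehicle_moving=True))   # ignored
        print(ignition_request(3.5, vehicle_moving=True))   # force shutdown
        ```

        The sketch makes the usability problem concrete: the action a panicked driver instinctively performs (a quick press) is the one deliberately ignored at speed, whereas a keyswitch makes the habitual action and the emergency action one and the same.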

        Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
        XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
        Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

        2 users thanked author for this post.
      • #227760

        “The 9/11 attacks in the US were not the work of state actors.”
        – Excuse me, but 9/11 was a terrorist attack … executed by hijacking planes.

        The statement made in the post from which you took this quote is in reference to the hacking of software by government sponsored hackers. In other words, state actors. You missed the point totally.

        • #227828

          My point is, it doesn’t take a State Actor to execute a terrorist attack. A hack can be programmed and executed by a stateless group, a domestic group, or even by an individual.

          -- rc primak

      • #227765

        “Nothing in the current implementation of driverless cars prevents manual takeover or self-maintenance of things like tire pressure and oil changes.”

        Concerning manual takeover: Nothing can prevent it, except the driver’s condition, attitude and readiness: see my longish post further up. This is precisely what is so unrealistic about current expectations of “driverless but with a human ready to take control” cars, trucks, etc. that are expressed so very optimistically and so repeatedly by so many.

        Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

        1 user thanked author for this post.
        • #227815

          Right now they’re all prototypes, so we can only speculate what the final product would look like, but it is possible that the idea would be to eliminate the driver completely, and therefore the controls the driver would have to use to take over.  Even in aircraft, the goal has never been to engineer the pilot completely out of the picture, so asking the pilot to take over at a moment of confusion is more reasonable than asking a “driver” who has been texting or using Facebook for 20 minutes to take over seconds before an accident.

          Even with airliners, which are at most semi-autonomous, there have been a number of incidents where things went wrong because the pilot “let the plane get ahead of him,” and the general deterioration of hand-flying skills through lack of practice is frequently cited as another problem.  Drivers are bad enough now, even knowing that the car can’t drive itself.

          Of course, driving a car is a lot less complex than flying a plane, but re-establishing situational awareness at the worst possible time still takes precious seconds that may not exist in an emergency.

          Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
          XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
          Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

      • #227768

        And what would a human do? Would you make the “correct” decision? Also, sheer numbers aren’t all AI can consider. Like the grim calculus of evaluating the worth of lives and injuries in plane crashes, one person’s worth may be many times the worth of another person or group.

        The point is really one of who’s morally as well as legally responsible here for their actions: the driver or the person or persons that programmed the “AI” (more likely a hybrid of AI and Expert System)?

        Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

    • #227758

      I am convinced that hacking vehicles in operation is already happening on occasion.

      Exactly what the FBI and other government agencies want us to believe. But where’s the proof?

      Here, here, here and many others, mostly involving hacking into the car’s ECU. It’s fairly common these days for cars to be operated remotely via keychain remotes. Some cars can even be operated with a smartphone and an app. Hacking cars is possible for someone who knows what they are doing. This will only get more sophisticated, as hackers are always one step ahead of the patchers and no software can be hack-proof… at least, none that I am aware of.

      1 user thanked author for this post.
      • #227827

        And yet, in the real world, no crashes on the highway which can be traced to hacked cars. Still nothing but Proof of Concept and news story stunts. All your examples were staged stunts.

        -- rc primak

        • #227849

          And yet, in the real world, no crashes on the highway which can be traced to hacked cars. Still nothing but Proof of Concept and news story stunts. All your examples were staged stunts.

          A successful hack attack wouldn’t be obvious; you’d just have a dead driver in a “typical” accident.

          If I help you cross the median and meet an oncoming truck, will anyone ever consider that you might not have been attempting suicide?

          1 user thanked author for this post.
    • #227854

      The risk of cars…to me… is no different than the risk of the internet of things.

      I don’t want my car to get hacked while I’m barreling down the highway at 70 MPH. That danger doesn’t exist with any other IoT device. Also, if someone disables my car remotely, I’m stuck; I can’t go anywhere. And my car is an expensive IoT device, unlike my far cheaper TV or thermostat.

      Group "L" (Linux Mint)
      with Windows 10 running in a remote session on my file server
      • #227883

        That danger doesn’t exist with any other IoT device.

        The Internet of Things also includes remote monitoring of natural gas supply networks in some areas. (See the Merrimack Valley incident: Columbia Gas was servicing its network under remote control from a center in Columbus, Ohio, with no way to shut off excessive pressure from that control center.) Tell us in Massachusetts that remote control can’t fail in a way that can kill people or leave communities without heat, hot water or cooking gas for months.

        You don’t have to be going 70 mph on a highway to be killed by poor implementation of IoT technologies.

        -- rc primak

        1 user thanked author for this post.
        • #227888

          RC, first you tell us not to worry because there are no real-world examples. Now you correct my IoT statement by giving a real-world example.

          When I spoke of “any other IoT device”, I meant IoT devices that a regular consumer would have.

          Group "L" (Linux Mint)
          with Windows 10 running in a remote session on my file server
          • #228472

            I was responding to the absolute statement about “any other IoT device”. There are lots of IoT devices, and they operate in very different environments.

            There is really no difference in how the risks arise. But there are different IoT environments, and (for me) different levels of risk tolerance.

            IoT Things operate the same way, whether they are a coffee maker, a warehouse or a public utility. The difference is that there are places where I would not want even a low risk to materialize. I do not classify cars on a roadway as a zero risk tolerance scenario. But I do want greater reliability and security in a car than I need in a coffee maker.

            I never said there is zero risk in IoT. I said, and I still say, car hacking stunts have so far never shown anything which has happened on a real highway to an actual vehicle. It’s all carnival sideshow hype and hoopla. That does not mean there isn’t a risk. It just means the risk is very, very low.

            The places where even such low risks are unacceptable to me are few, but public utilities are among those few cases where even the rare failure can have devastating consequences. That’s not about driverless cars. It’s about an entirely different IoT environment.

            The question I was responding to in regard to the gas distribution system was what I read as a very general and sweeping statement about IoT, not a specific post about driverless cars.

            I do make distinctions among the various types of IoT environments. And hence, I do have different levels of risk tolerance depending on the specific IoT environment we’re talking about.

            -- rc primak

    • #227890

      I can foresee a time when the police will have the ability to remotely shut down your car if they are chasing you, in order to avoid a high-speed chase. They would sell the public on the benefits of this sort of technology, because it would be for “public safety”. The problem is, once you have it, you will never be able to get rid of it. And it is possible that this technology will be abused at a future time.

      Group "L" (Linux Mint)
      with Windows 10 running in a remote session on my file server
      4 users thanked author for this post.
      • #227909

        I would even go so far as to say it’s a near certainty that it will be abused and sooner rather than later if/when this gets implemented on a larger scale. Isn’t that how it works in today’s world? After all, the public is on a need-to-know basis and most of the time, we don’t need to know. The public gets told the side of the story that gets everyone on board, but we don’t get told the other side of it. Wasn’t telemetry supposed to be used for improving software and discovering bugs to be fixed? Things aren’t as they seem to be more times than not. I don’t think we are even close to being mature enough as a species for this kind of tech.

        Also, bugs in an OS can be fixed, or we have backups to restore from to mitigate those issues. If such things happen to your car, it may cost you your life and the lives of others. That raises the stakes enormously. I just don’t have the degree of trust necessary to be anything other than dead set against it.

        As another poster above alluded to, if hacking (and crashing) of cars were already happening, we wouldn’t know about it. It most certainly can be done, though, and it certainly wouldn’t stop at proof-of-concept demos. I’m sure the NSA or someone will develop exploits and they will get leaked to the public, or a handful of gifted hackers will develop something. I would not be surprised if they already exist, to be honest. I do not trust people to do this right and not abuse it.

        We’re already living in a surveillance state. It is, IMO, a foregone conclusion that anything that can be used to expand that agenda will be used to do so, and remote control of anyone’s car, anywhere, at any time (via satellite) would surely be a valuable tool for such an agenda, as well as for anyone wishing to act maliciously with malware or ransomware. I don’t think it would make us safer, but less safe: people’s lives are directly at stake, and our personal freedoms (the ones we have left, anyway) would also be at risk through the loss of control over one’s own vehicle, at any time, for better or worse.

        Maybe a little overboard, but this just seems to be the way of the world anymore. I would love it if we were a mature enough species to have such powerful technology without abusing it, but we are nowhere near that point right now. Not even in the ballpark, in the country or on the planet. Technology is already used to take advantage of and abuse people. It’s quite frightening to think of upping those stakes. We’re just not ready.. not even close. We use tech for exploitation, making money and war. We already abuse pretty much everything.

        3 users thanked author for this post.
    • #229231

      FYI

      “….It’s estimated that there are approximately 21 million Internet-connected cars on the roads today, and researchers predict that by 2020 this number will grow to 200 million. ”

      https://www.automotive-iq.com/autonomous-drive/articles/dangers-connected-cars

       

      1 user thanked author for this post.
      • #229273

        You and I are obviously less informed than the vaunted researchers and their crystal balls. Do you believe there will be 9½ times as many as are currently in use, in only 14 to 26 more months?

        I do not. Guess I’ll have to write a note to check back on my prediction.
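        For what it’s worth, the arithmetic behind that skepticism, using the article’s own 21 million and 200 million figures and assuming roughly two years to 2020, sketches out like this:

        ```python
        # Back-of-envelope check of the article's connected-car prediction:
        # 21 million cars today growing to 200 million by 2020.
        today = 21_000_000
        predicted = 200_000_000

        multiplier = predicted / today
        print(f"{multiplier:.1f}x the current fleet")  # about 9.5x

        # Compound annual growth needed over roughly two years:
        years = 2
        cagr = (predicted / today) ** (1 / years) - 1
        print(f"implied annual growth: {cagr:.0%}")  # over 200% per year
        ```

        A fleet more than tripling every year, two years running, is the kind of claim that invites a note-to-self to check back later.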

        2 users thanked author for this post.
        • #230381

          And, after that shows how well this works, probably 0% for at least one generation.

          Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      • #229969

        That prediction seems to be off by a bit. 2025 maybe, but remember, folks keep a leased car three to five years, and an owned car up to ten or more years. I wonder if that prediction has taken these facts into account?

        If Prius owners are any indication, we hybrid and all-electric car owners are likely to hold onto our cars even longer.

        -- rc primak

        • #229974

          I think it is possible @hiflyer dropped his sarcasm tag, or came to a different view later. I cannot find it now, but I recall a post where he called into question the viewpoint – not the credentials – of that article’s main opinion.

          Beyond the long useful life of the autos currently in use, there is drivers’ lack of interest in surrendering control of their cars while still being legally liable for all outcomes. Until the manufacturer accepts liability for all moral and economic results of failure, I will not operate or purchase one of these vehicles. I doubt I am alone in this view.

    • #230363

      VOLKSWAGEN PATCHES PASSAT PATCH

      “Until you get the software update, keep your speed to under 80 mph.”

      https://www.consumer.ftc.gov/blog/2018/11/do-you-have-2012-2013-or-2014-vw-passat-tdi

      • #230394

        I’m being unfair, but the phrase “unsafe at any speed” springs to mind. (ht Ralph Nader) I think that referred to the Corvair design, but will now be forever tied to engineered piloting controls in automobiles.

        Having typed that out, now reassessing the auto- portion of that word. I may be a little slow lately.

    • #230409

      It’s happening now…

      https://www.google.com/amp/s/amp.interestingengineering.com/volvo-cars-and-volvo-trucks-now-communicate-to-improve-safety

      Scandinavian highway authorities get automatic real time reports of when and where vehicle antilock brake systems are activated, so they know where to deploy the snow plows.

      They also know when you have turned on your emergency flashers.

      https://www.media.volvocars.com/us/en-us/media/pressreleases/157065/volvo-cars-puts-1000-test-cars-to-use-scandinavian-cloud-based-project-for-sharing-road-condition-in

       
