Friday, August 31, 2018
SmartDrivingCar.com/6.37-UGLY_DiDi-083118
37th edition of the 6th year of SmartDrivingCars
China’s Didi suspends Hitch carpool service after the murder of a female passenger
A. Liptak, Aug 26, “On Friday, a driver for Didi Chuxing’s ride-share service Hitch killed a 20-year-old female passenger in the Chinese city of Wenzhou. Following the incident, the company says that it will suspend the ride-hailing service while the company examines its practices. This is the second such death associated with the app this year. In May, a driver raped and murdered his 21-year-old passenger in Henan province, using his father’s account, which prompted the company to suspend its service for six weeks as it made some changes. Female riders had complained that drivers often left inappropriate comments on their profiles after using the service, and the company implemented an emergency button, late-night restrictions, and new verification methods, including facial recognition, to “minimize the risk of unapproved account use.”
This latest incident demonstrates that those new measures weren’t enough. The company says that while the suspect — identified as Zhong — had no criminal record, submitted his driver’s license and vehicle identification, and passed the new mandated facial recognition test, he was the subject of prior complaints. On Thursday, a female passenger reported him to the company when he asked her to sit in the front seat with him, drove her to an isolated location, and attempted to follow her…. Read more Hmmmm…. Read following…Alain
China’s Didi suspends carpooling service after another female passenger is murdered
J. Russell, Aug 26, “Chinese ride-hailing firm Didi Chuxing, the $60 billion-valued company that bought out Uber’s China business, has suspended its carpooling service after the murder of a female passenger. …” Read more Hmmmm…. Read following…Alain
Didi Suspends Carpooling Service in China After 2nd Passenger Is Killed
S. Wee, Aug 26, “Didi Chuxing, China’s popular ride-sharing service, fired two senior executives and suspended a car-pooling service after the second killing of a female passenger in three months….” Read more Hmmmm…. This is tragic, and at first I thought it was simply DiDi not vetting its drivers properly and not responding quickly enough to a complaint. But no….
Customers Died. Will That Be a Wake-Up Call for China’s Tech Scene?
L. Yuan, Aug. 29. “Huang Jieli, who ran a Chinese ride-sharing business called Hitch, was invited to a wedding in March. One of her drivers was getting married to a woman who had once been his passenger. Thanks, the invitation said, for getting them hitched. Didi Chuxing, Hitch’s corporate parent and one of the world’s most successful and valuable start-ups, once cheered these stories of young love. Like so many other Chinese internet companies, Didi explored all kinds of ways to bring in new users, including social networking.
So through suggestive ads hinting at hookups through driving, Didi pushed Hitch’s romantic possibilities. In a 2015 interview with the Chinese online portal NetEase, Ms. Huang compared Hitch cars to cafes and bars. …
In the aftermath of the two assaults, Chinese media has uncovered dozens of others over the years. It also found past advertisements for Hitch that featured lewd double entendres and other language that could suggest a female passenger might welcome an advance from her male driver. …” Read more Hmmmm…. This is truly UGLY! Given all of the inter-personal sensitivities associated with this business, both now with gig drivers and later with efforts to encourage ride-sharing in driverless aTaxis, how could such an advertising approach ever see the light of day at any on-demand ride-hailing company? Aren’t there any adults at DiDi? It is bad enough that Uber deliberately tested its Self-driving cars in domains where/when those cars had their AEB turned off. It is far worse for the whole ride-hailing mobility-as-a-service business when one of its shining stars spends money advertising in such a way. Ride-hailing and ride-sharing are serious businesses that absolutely require simultaneous and complete trust and confidence between what are otherwise complete strangers. Inspiring any desire to engender “ChatRouletteness” should and will lead directly to Blockbuster. This is really UGLY! Alain
Smart Driving Cars Podcast Episode 54
F. Fishkin, Aug 26, “The impact of the Hitch service murders in China on ride sharing, Toyota’s investment in Uber and the issue of who controls data…are the focus of Episode 54 of the Smart Driving Cars podcast. Co-hosts Alain Kornhauser of Princeton University and Fred Fishkin are joined by The Dispatcher publisher Michael Sena.” Hmmmm…. Now you can just say “Alexa, play the Smart Driving Cars podcast!” Ditto with Siri and GooglePlay. Alain
Real information every week. Lively discussions with the people who are shaping the future of SmartDrivingCars. Want to become a sustaining sponsor and help us grow the SmartDrivingCars newsletter and podcast? Contact Alain Kornhauser at alaink@princeton.edu! Alain
Waymo’s Big Ambitions Slowed by Tech Trouble
A. Efrati, Aug 28, “CHANDLER, Ariz.—Alphabet’s Waymo unit is a worldwide leader in autonomous vehicle development for suburban environments. It has said it would launch a driverless robo-taxi service to suburban Phoenix residents this year. Yet its self-driving minivan prototypes have trouble crossing the T-intersection closest to the company’s Phoenix-area headquarters here.
Two weeks ago, Lisa Hargis, an administrative assistant who works at an office a stone’s throw from Waymo’s vehicle depot, said she nearly hit a Waymo Chrysler Pacifica minivan because it stopped abruptly while making a right turn at the intersection. “Go!” she shouted angrily, she said, after getting stuck in the intersection midway through her left turn. Cars that had been driving behind the Waymo van also stopped. “I was going to murder someone,” she said.
The hesitation at the intersection is one of many flaws evident in Waymo’s technology, say five people with direct knowledge of the issues in Phoenix. More than a dozen local residents who frequently encounter one of the hundreds of Waymo test vehicles circulating in the area complained about sudden moves or stops. The company’s safety drivers—individuals who sit in the driver’s seat—regularly have to take control of the wheel to avoid a collision or potentially unsafe situation, the people said….
In reality, the vast majority of Waymo’s test cars continue to use safety drivers. Typically, the cars that drive without a person at the wheel have been in relatively small residential areas of Chandler, Ariz., where there is little traffic, according to people familiar with the program. And these vehicles are monitored closely by remote operators that can help the cars when they run into issues. (Waymo last week told the Verge that its first driverless taxis would include a “chaperone” from Waymo who would sit in the cars.)…” Read more Hmmmm…. As I’ve been saying, we are still at the very beginning…. 0.001 degrees Kelvin. Plus “others/non-users” will never like them. Just this morning I honked at the driver in front of me who passed up a gap to make an unprotected left turn. I had to wait for a whole cycle!! I hate every car that drives on Cleveland Lane in front of my house. I want that street all for myself. I hate buses. I hate trucks. I hate everything and everyone but me. This is just human nature. Little respect for others. Heck, I’m the only good driver out there. The innuendos are not surprising. We’ll just have to grin and bear them as we do with all of the conventional cars running around out there.
On a more serious note, this reality demonstrates that we may need regulation/legislation that explicitly protects the rights of driverless cars to share the public road infrastructure. We do this for bicycles, motorcycles and, in a way, even for trucks and buses. Also, buses and other vehicles today have signs on their backs that state “This vehicle stops at all RR crossings” because their behavior differs from normal car behavior. I suggest that Waymo and all that are testing driverless vehicles on city streets place a sign on the back of each vehicle: “This Car Obeys All Traffic Laws and Rules. You should too!” Alain
M. Sena, Sept 1:
Data Control Is the Key to the Future of Transport
WHOEVER CONTROLS OUR PERSONAL DATA, and determines where and how it is stored and used, will control the future of transport in the age of collective intelligence. Will it be the state or business or ourselves who exercise this control? Will there be one, global approach or different regional, national or local solutions? How we move will depend on the answers to these questions….
1. In their book, Radical Markets: Uprooting Capitalism and Democracy for a Just Society (Princeton University Press – 2018), Eric A. Posner and E. Glen Weyl argue that the term ‘artificial intelligence’ is a misnomer and should be replaced by ‘collective intelligence’. I agree…
3. From THE ECONOMIST Special Report: Fixing the Internet, June 20th 2018. China: The ultimate walled garden. In this article there is also a quote from Feng Xiang of Tsinghua University. He is reputedly one of China’s most prominent legal scholars. “If AI remains under the control of market forces, it will inexorably result in a super-rich oligopoly of data billionaires who reap the wealth created by robots that displace human labour, leaving massive unemployment in their wake.”…” Read more Hmmmm…. Very thought provoking. And see following article. Alain
Bicycles: Are They Good or Bad for Cities?
APOSTASY! YOU SAY. It’s obvious that getting people to use non-polluting forms of transport is better than encouraging them to use cars or even collective transport options, isn’t it? Well, that’s what I want to look at, especially because an increasing share of these two-and-three-wheelers are battery driven, and are being joined by electric kick-scooters, hoverboards and skateboards. Add to this the bike sharing trend, both dock-based (e.g. CitiBike, Santander) and dockless (e.g., Jump, Lime), that are adding millions of bicycles to cities around the world, and you have a phenomenon at which it is certainly worth having a closer look. …” Read more Hmmmm…. How dare you Michael. The major issue with this mode is that its main competitor is walking, and in aggregate the VMT, congestion, energy and pollution implications are small because bicycles comprise such a small share of PersonMilesTraveled… (Maybe 2% of the person trips in the US, of which most are less than a couple of miles. If all were put in single-occupant cars, VMT would increase less than 1%. Energy and pollution are largely proportional to VMT). Bicycles have been embraced by “DoTs” and planners because they didn’t incur large budgets and made it seem as if something was being done. Sorry to be so harsh.
Robot Drivers Need to Navigate More Like Humans
Our current navigation systems represent the best we could do with non-seeing machines. They do an adequate job of directing drivers to a specified destination, but they do not model the way humans find their way. If robots with vision sensors are to take over the full driving and wayfinding functions, we need to find a new and better navigation paradigm, both for the maps and the guidance methods used. …
Humans have a sense of place…” Read more Hmmmm…. A very nice piece on in-car navigation, especially the “Humans have a sense of place” part. However, as Michael knows, I worked long and hard on turn-by-turn navigation. While certainly not perfect, CoPilot is really very good. The real-time traffic is a phenomenal reliever of anxiety because it so reduces uncertainty about what’s ahead. The “instantaneous” responsiveness if you miss a turn (not “…tough luck…”) and the provision of just the needed information at a glance provide far more safety while driving than any decrement in safety associated with any distraction it may cause. Alain
Franken-algorithms: the deadly consequences of unpredictable code
A. Smith, Aug 30, “The 18th of March 2018, was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead. Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes. An array of radar and light-emitting lidar sensors allowed onboard algorithms to calculate that, given their host vehicle’s steady speed of 43mph, the object was six seconds away – assuming it remained stationary. But objects in roads seldom remain stationary, so more algorithms crawled a database of recognizable mechanical and biological entities, searching for a fit from which this one’s likely behavior could be inferred.
At first the computer drew a blank; seconds later, it decided it was dealing with another car, expecting it to drive away and require no special action. Only at the last second was a clear identification found – a woman with a bike, shopping bags hanging confusingly from handlebars, doubtless assuming the Volvo would route around her as any ordinary vehicle would. Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention. Elaine Herzberg, aged 49, was struck and killed, leaving more reflective members of the tech community with two uncomfortable questions: was this algorithmic tragedy inevitable? And how used to such incidents would we, should we, be prepared to get?
“In some ways we’ve lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.”…
Few subjects are more constantly or fervidly discussed right now than algorithms. But what is an algorithm? In fact, the usage has changed in interesting ways since the rise of the internet – and search engines in particular – in the mid-1990s.” Read more Hmmmm…. A MUST read. Alain
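The article’s “six seconds away” figure is easy to sanity-check: at the reported steady 43 mph, six seconds corresponds to roughly 115 meters of detection range. A quick sketch of the arithmetic (the speed and time are the article’s; the mph-to-m/s conversion is the standard one):

```python
# Sanity check of the article's time-to-collision figure: a stationary
# object "six seconds away" at the Uber Volvo's steady 43 mph.
MPH_TO_MPS = 0.44704            # standard conversion, miles/hour -> meters/second

speed_mps = 43 * MPH_TO_MPS     # ego speed in m/s
time_to_collision_s = 6         # the article's figure
distance_m = speed_mps * time_to_collision_s

print(round(speed_mps, 1))      # -> 19.2 (m/s)
print(round(distance_m))        # -> 115 (meters of implied detection range)
```

In other words, the sensors flagged the object well over a football field ahead; the failure described in the article was classification and response, not detection.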
Tesla driver arrested for drunk driving after blaming Autopilot for crashing into a fire truck
F. Lambert, Aug 25, “A Tesla Model S crashed into yet another fire truck resulting in two injuries in San Jose earlier today.
The driver said that he “thought the Model S was on Autopilot”, but he was arrested under suspicion of drunk driving.
I have to say “yet another fire truck” because it is the third accident involving Tesla vehicles reportedly on Autopilot and fire trucks this year alone.
It happened in Culver City last January and again in San Jose a few months later. Now the San Jose Fire Fighters are complaining of another accident that happened early this morning. They reported on Twitter that a Tesla vehicle rear-ended one of their fire engines again:
TESLA near miss! For the 2nd time in recent months SJ FF’s escaped serious injury as a @teslamotors “Zero Emissions” vehicle slammed into the back of a #SJFD FireEngine @ 70 MPH on Hwy 101 at 1am – Reportedly the vehicle was in auto mode but auto braking system was not engaged. pic.twitter.com/gDQzXrFZ5S
— SanJoseFireFighters (@SJFirefighters) August 25, 2018
…It’s not the first time an allegedly drunk Tesla driver has tried to use the Autopilot card to get out of it. Earlier this year, a Tesla driver passed out allegedly drunk in his Model S and he told the police that the car was on Autopilot….” Read more Hmmmm…. It sounds like it is the AEB that is at fault and NOT AutoPilot (ACC + lane centering). AEB is supposed to address stationary objects in the lane ahead. (MarketWatch reported: “…The California Highway Patrol said the car rear-ended a fire engine that had stopped on the far-right lane of heavily trafficked Highway 101 with its flashing lights on around 1 a.m…”) AutoPilot, which is really only lane centering and Intelligent Cruise Control (aka Adaptive Cruise Control (ACC)), keeps you from crashing into moving vehicles ahead. It is “easy” to identify a vehicle moving in the lane ahead because radar accurately measures the relative speed between your car and the car ahead, and your car accurately measures its own current speed. Thus there is a clear difference between those two values as long as both vehicles are moving. That difference clearly distinguishes that object, of which there is usually just one (there may be at most a couple of others in the neighboring lanes), from the typically many stationary objects that are in view at any time (telephone poles, trees, overhead signs, traffic lights, …). Those stationary objects, because they are stationary, have a closing speed that is very similar to your current speed. Because there are always so many of them, it is not a rare event for one to be thought to be in the lane ahead when it actually is something that can be passed under or is off to the side. It is these false alarms that cause what are otherwise good members of the Society of Automotive Engineers to decide to “do no harm” by disengaging the AEB so that it won’t apply the brakes during these false alarms.
Unfortunately, this disengagement causes great harm when the object isn’t a false alarm but instead a parked firetruck with its lights flashing. Not pretty!! What continues to be clear is that the SAE should insist that AEBs be substantially improved, reducing their false-alarm rates to a level so small that engineers can “do no harm” without turning off AEB!
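To make the argument above concrete, here is a deliberately simplified Python sketch of the relative-speed discrimination just described. It is illustrative only, not any OEM’s actual logic; the 2 m/s “stationary” margin and the function name are assumptions of mine:

```python
# Simplified sketch of the radar logic described above (illustrative only;
# not any OEM's actual implementation). Radar measures the closing speed of
# each return; the ego car knows its own speed. A return that closes at
# roughly the ego speed must itself be stationary (pole, sign, bridge --
# or a parked firetruck), while one closing much more slowly is a moving
# lead vehicle that ACC can safely track.

EGO_MOVEMENT_MARGIN_MPS = 2.0   # assumed tolerance for calling an object "stationary"

def classify_return(ego_speed_mps: float, closing_speed_mps: float) -> str:
    """Classify a radar return as a moving vehicle or stationary clutter."""
    object_speed = ego_speed_mps - closing_speed_mps   # the object's own speed
    if abs(object_speed) < EGO_MOVEMENT_MARGIN_MPS:
        return "stationary"   # closes at ~ego speed: one of the many fixed objects
    return "moving"           # clearly moving: an easily trackable lead vehicle

# At ~65 mph (29 m/s): a car ahead doing 25 m/s closes at only 4 m/s...
print(classify_return(29.0, 4.0))    # -> moving
# ...but a parked firetruck closes at the full 29 m/s, and lands in the same
# bucket as every telephone pole and overhead sign -- the bucket many systems
# are designed to ignore at speed.
print(classify_return(29.0, 29.0))   # -> stationary
```

The sketch shows why the false-alarm problem exists: everything stationary, threat or not, produces the same radar signature, so ignoring that bucket trades rear-ending parked firetrucks for not phantom-braking at overpasses.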
What remains unclear about this particular crash is that the damage sustained by the Tesla does not seem severe enough for the crash to have actually occurred at an impact speed anywhere near 70 mph. Brakes, AEB or otherwise, may have been applied prior to the crash, but unfortunately not soon enough. It is imperative that Tesla divulge exactly what both the AEB and AutoPilot “knew and did” in the “10 seconds” prior to this crash. Such data will let Tesla and everyone else know what can happen in such situations and prevent another car from slamming into the rear of a parked firetruck. If the impact speed really was 70 mph, then the Tesla’s crash mitigation/safety characteristics are really EXCELLENT! Alain
Former GM R&D Boss, Larry Burns Tells The Tale Of The Automated Car In New Book
S. Abuelsamid, Aug 26, “… While we know which teams ultimately succeeded, Burns and Shulgan fill in much of the back story of what happened over the next four years as this robotic driving technology evolved. If there is one downside to the tale, it’s the focus on the teams from Carnegie Mellon and Stanford with very little told of the other programs…” Read more Hmmmm…. While it is true that very little was told about other programs (and nothing at all about Princeton’s, which really didn’t deserve any ink except that it was the only one that was actually all done by undergrads who have gone on to do quite well, thank you), I found the most enlightening aspect to be what has transpired over the last 4 years… the Levandowski dynamics, Ford’s failure to consummate its inside-track opportunity to “partner” with Google/Waymo (Bill Ford must have been furious), the perspective on the Joshua Brown and Elaine Herzberg crashes (the culprit is the explicit, by-design disregard of stationary objects ahead when traveling above some (very low) speed. This has to be as reprehensible as what the coders did and senior management approved/motivated @ VW wrt “dieselgate“.) and the recent dynamics between Waymo and Uber. This book is a MUST read. In some places you can skim. In others read every word. Alain
Apple self-driving test car gets rear-ended by a Nissan Leaf in first ever crash
N. Statt, Aug 31, “One of Apple’s autonomous cars, which are currently driving around Sunnyvale, California and other nearby Silicon Valley cities, got into its very first crash one week ago, according to a report filed with the California Department of Motor Vehicles. Like many self-driving car crashes, this one was not the software’s fault. The Apple car, a modified Lexus RX450h SUV carrying special equipment and sensors, was traveling at just 1 mph while preparing to merge onto the Lawrence Expressway in Sunnyvale when a Nissan Leaf rear-ended it going around 15 mph. Apple’s Lexus and the Leaf sustained damage, but neither car’s passengers received any injuries, the report states.
That suggests autonomous software makers have a lot of hurdles ahead of them that go beyond ensuring the software is capable and safe in all situations. “A possible explanation is that these cars don’t drive the same way that people do,” Phil Koopman, a Carnegie Mellon professor and software engineer who consults for self-driving car companies, told Consumer Affairs. “And if they don’t drive the same way that people do, people’s expectations of the vehicles would be incorrect.”…” Read more Hmmmm…. Bull! Humans disregard traffic rules and laws when they drive. Either the rules and laws change to accommodate how people drive and/or, as I suggested above, Apple should put a sign on the back of each of their cars: “This Car Obeys All Traffic Laws and Rules. You should too!”
By the way, Why didn’t the Nissan Leaf’s AEB prevent this crash???? What’s up Nissan??? Alain
WHY TESLA’S AUTOPILOT CAN’T SEE A STOPPED FIRETRUCK
J. Stewart, Aug 27, “… Early Saturday morning, a Tesla Model S driving south on the 101 Freeway slammed into the back of a stopped firetruck in San Jose, California… Whatever the particulars, there’s a serious sense of déjà vu here. In January, a Tesla Model S drove into the back of a stopped firetruck on the 405 freeway in Los Angeles County. … In May, a Tesla driver in Utah hit a firetruck at highway speeds… How is it possible that one of the most advanced driving systems on the planet doesn’t see a freaking fire truck, dead ahead?
The car’s manual does warn that the system is ill-equipped to handle this exact sort of situation: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead.”
Volvo’s semi-autonomous system, Pilot Assist, has the same shortcoming. Say the car in front of the Volvo changes lanes or turns off the road, leaving nothing between the Volvo and a stopped car. “Pilot Assist will ignore the stationary vehicle and instead accelerate to the stored speed,” Volvo’s manual reads, meaning the cruise speed the driver punched in. “The driver must then intervene and apply the brakes.” In other words, your Volvo won’t brake to avoid hitting a stopped car that suddenly appears up ahead. It might even accelerate towards it.
The same is true for any car currently equipped with adaptive cruise control, or automated emergency braking. It sounds like a glaring flaw, the kind of horrible mistake engineers race to eliminate. Nope. These systems are designed to ignore static obstacles because otherwise, they couldn’t work at all. (Alain added emphasis)
“You always have to make a balance between braking when it’s not really needed, and not braking when it is needed,” says Erik Coelingh, head of new technologies at Zenuity, a partnership between Volvo and Autoliv formed to develop driver assistance technologies and self-driving cars. He’s talking about false positives. On the highway, slamming the brakes for no reason can be as dangerous as not stopping when you need to… ”
Read more Hmmmm…. This is total gobbledygook. None of these situations, including the Josh Brown Florida crash, requires “slamming on the brakes”. How about just slowing down a little and gathering more data so that the system can determine whether the threat is real or false? How about looking behind you; maybe the brakes can be slammed if no one is there to run into you. How about improving the design of the radars and imaging systems so that they can distinguish between stationary objects that you can pass under and those that you can’t? And if it is because the car is going too fast for the AEB to work reliably wrt fixed objects ahead (those having closing speeds about equal to the vehicle’s speed), then all intelligent cruise control systems AND automated lane keeping systems should be required/legislated to be turned off. NO hands off the wheel, no feet off the brake at those higher speeds! If OEMs want to sell cars offering these comfort & convenience features, AEB must work.
NHTSA should issue a safety recall to make sure that cruise control (intelligent or conventional) cannot accelerate past the maximum speed at which the AEB will function properly to avert a collision with a stationary object in the lane ahead, and cannot be engaged at all if the car’s speed is greater than this maximum value. NHTSA should also move forward on requiring AEB on all new vehicles, along with the restriction that the speed of the vehicle, controlled manually or automatically, not be permitted to exceed the maximum speed at which the AEB will brake sufficiently to avert a collision with a stationary object in the lane ahead. It was nice that the OEMs volunteered to make AEB standard by 2022. That’s the good news. The problem is that the AEB the OEMs volunteered is likely designed to work only if the car is traveling at or under 25 mph, and may even be designed to NOT work at all (be actively disengaged) at speeds greater than 25 mph. This is simply NOT good enough. These systems MUST be engaged and work at all car speeds. So if they only work up to 25 mph, then the car must be actively restricted from going faster than 25 mph. Can you imagine, if NHTSA does that, how quickly the OEMs are going to get AEB really working? These are flawed systems and they need to be fixed now.
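The interlock being proposed here is simple enough to state in a few lines of code. The following Python sketch is purely illustrative; the function names are hypothetical and the 25 mph figure is the one cited above:

```python
# Illustrative interlock for the rule proposed above: cruise control
# (conventional or adaptive) may neither engage above, nor hold a setpoint
# past, the maximum speed at which the AEB can avert a collision with a
# stationary object ahead. Names and the 25 mph figure are illustrative.

AEB_MAX_EFFECTIVE_MPH = 25.0   # speed up to which this AEB handles stationary objects

def cruise_may_engage(current_speed_mph: float) -> bool:
    """Refuse cruise engagement above the AEB's effective speed."""
    return current_speed_mph <= AEB_MAX_EFFECTIVE_MPH

def clamp_cruise_setpoint(requested_mph: float) -> float:
    """Cap the driver's requested cruise speed at the AEB's effective speed."""
    return min(requested_mph, AEB_MAX_EFFECTIVE_MPH)

print(cruise_may_engage(20.0))        # -> True
print(cruise_may_engage(40.0))        # -> False (AEB can't protect you here)
print(clamp_cruise_setpoint(70.0))    # -> 25.0
```

The point of the sketch is the incentive it creates: with such a clamp in force, the only way an OEM can sell 70 mph cruise control is to ship an AEB that actually works at 70 mph.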
There is also some bull in this article about lidars. The long-term solution is NOT necessarily lidar; it has its host of uncertainties and false alarms at speed and long range. The answer is in using more intelligent filters and better real-time processing of the sensor data coming from vision, radar and/or lidar. Alain
Watch Tesla Autopilot’s latest update driving on winding roads – showing improvements
F. Lambert, Aug 29, “…Tesla’s version 9 software update, which is expected in the coming weeks, is supposedly going to include significant improvements and new features…” Read more Hmmmm…. See video 1 and video 2 in the article. They are very impressive; however, does the AEB function properly at the speeds at which the videos were recorded? If the AEB is disengaged, then AutoPilot should/must be disengaged. Alain
IoT to Improve Accessibility
K. Pyle, Aug 29, “An open and crowd-sourced accessibility lab is how Joe Speed, CTO of ADLINK’s IoT Solutions and Technology group describes Accessible Ollie. The commercial promise of this autonomous, low-speed (<25 MPH) electric mobility pod is to provide high-quality, on-demand transportation to all people, regardless of an individual’s ability….” Read more Hmmmm…. And watch the interview and more. Alain
Taking Tesla’s 14-Day Autopilot Trial For A Spin
K. Field, Aug 29, “Tesla’s Enhanced Autopilot solution gives owners several new chunks of functionality that work together to make driving a safer, lower stress experience. Features like Summon and Autopark can add immense value in certain situations where tight parking arrangements can make it difficult for a human driver to park. The ability to remotely pull the vehicle out of a parking spot with the phone can be a game changer, but is really just a building block on Tesla’s road to fully autonomous driving….” Read more Hmmmm…. Seems impressive. Does AEB work?? Alain
Elevator Analogy
B. Rutherford, Aug 30, “If you got on an elevator today, you rode on a Level 5 autonomous vehicle. What we’re trying to do is take what’s currently vertical, horizontal.” …” Read more Hmmmm…. Nice to see Brad using the elevator analogy; however, it should be “…you rode on a Driverless vehicle”. What’s “Level 5”???? Alain
Who’s more likely to text and drive—teenagers or their parents?
Q. Fottrell, Aug 25, “Most teenagers don’t remember a world without smartphones. They’ve grown up with warnings about the dangers of smartphones and may have taken those lessons to heart. Nearly 40% of teenage drivers age 14 years and older texted while driving at least once in the month prior, but that alarming figure is lower than several estimates for all adults, according to a new study led by the Center for Injury Research and Policy at Nationwide Children’s Hospital. The study, conducted with researchers from the Centers for Disease Control and Prevention and The Ohio State University, crunched data from 35 states in the CDC’s Youth Risk Behavior Survey.
Texting while driving varied by state — from 26% in Maryland to 64% in South Dakota. “More teens texted while driving in states with a lower minimum learner’s permit age and in states where a larger percentage of students drove,” the study found. “White teens were more likely to text while driving than students of all other races/ethnicities. Texting while driving prevalence doubled between ages 15 and 16 years, and it continued to increase substantially for ages 17 years and up.”…” Read more Hmmmm…. Very interesting, but not pretty. Alain
Mr. Nader Misses the Mark on Driverless Cars
Opinion, Aug 30, “…An additional safety issue with self-driving vehicles is the effect an outage of cellular or GPS service would have on them and those around them…” Read more Hmmmm…. Actually, they don’t depend on Cellular or GPS. Outages of Cellular and GPS will probably freak us out more than Waymo. Comments are only so-so. Alain
Model 3 owner claims Tesla’s Autopilot feature saved his life
Y. Heisler, Aug 30, “…Since then, Tesla’s Autopilot feature has improved considerably. And while the feature is by no means close to perfect, the reality is that we don’t often hear stories about Autopilot actually saving lives. After all, when there’s no accident, there’s no story. Unless, of course, it happens to be caught on video.
That said, a Model 3 owner recently posted a video which shows how Tesla’s Autopilot feature prevented what could have otherwise been a serious crash. In fact, the Model 3 owner credits the software with saving his life…” Read more Hmmmm…. See video. Alain
Tesla Model 3 Insurance Approaches Porsche 911 Costs
J. Gilboy, Aug 30, “…Gabi Personal Insurance Agency found that the average annual insurance cost for a Model 3 owner is $2,814 (almost $235 per month) versus a $2,849 average ($237 monthly) for 911 owners, an annual difference of just $35, and a monthly difference of under $3. This is higher than the comparable Chevrolet Volt, which costs an average of $2,102 to insure for a year, or about 75 percent as much….
The insurance company also found that the Tesla Model S and X 75D cost nearly as much to insure as an Audi R8 supercar, at $3,410 and $3,519 annual averages respectively. These high costs are attributed to pricey replacement parts and Tesla-specific body shops. Collision insurance also made up a higher proportion of what a Tesla owner typically pays, making up 49 to 59 percent of costs to the owner, versus only 40 percent for the average Porsche owner…” Read more Hmmmm… Any discount for AutoPilot? Either insurers are making a bundle or these cars really aren’t safer. Are there enough Model 3s out there to properly assess their expected crash liabilities? Maybe buyers should seriously consider self-insuring collision coverage, especially those safe drivers with AutoPilot who will continue to pay attention and not text. See Model 3 video at end of article. Alain
Our Series B and the Road Ahead
Ro Gupta, Aug 23, “We’re announcing $20M in Series B financing, led by GV (formerly Google Ventures) with continued participation from prior investors including our Series A lead, Matrix Partners….” Read more Hmmmm…. Ro, congratulations!! Alain
Volvo teases potentially self-driving 360c concept in videos
J. Holmes, Aug 31, “Is Volvo preparing to show its vision for self-driving cars to the world? New teaser videos that highlight an upcoming concept called the 360c appear to point in that direction….” Read more Hmmmm…. Watch video. Half-baked??? or totally uncooked??? Alain
Yandex Launches the First Autonomous Ride-Hailing Service in Europe
Yandex Blog, Aug 28, “… and keep a safety engineer in the passenger seat…” Read more Hmmmm…. No need to read more now that you realize this is all hype. This is NOT Sputnikish and is at least 2 years behind Uber, so it is way worse than Pioneerish. I sure hope they haven’t turned off their AEB. Is all of Europe also more than 2 years behind Uber??? Alain
Driverless cars are catching up again after a bumpy ride
“…This week provided a clue to the answer. On Monday Toyota announced a half-billion dollar investment in Uber to continue developing self-driving cars. Shortly before that, Waymo, Google’s self-driving spin-off, launched its first driverless minivan service on public roads, in Phoenix, Arizona….” Read more Hmmmm…. This simply is not correct. Waymo hasn’t launched a “service” in Phoenix. They are still “testing” with hand-picked, vetted, volunteer customers. Whatever. This is mostly a plug for Larry Burns’ Autonomy and that is fine. Otherwise nothing new. Alain
The One Company Set to Power the Driverless Car and Truck Revolution
D. Lashmet, Aug. 31, “…We’re in the early stages of an unstoppable trend… the rise of self-driving cars….
We’re talking about California-based Nvidia…” Read more Hmmmm…. A shameless plug for nVIDIA, but I pretty much agree. Alain
Calendar of Upcoming Events:
3rd Annual Princeton SmartDrivingCar Summit
evening May 14 through May 16, 2019
Save the Date; Reserve your Sponsorship
Catalog of Videos of Presentations @ 2nd Annual Princeton SmartDrivingCar Summit
Photos from 2nd Annual Princeton SmartDrivingCar Summit
Program & Links to slides from 2nd Annual Princeton SmartDrivingCar Summit
On the More Technical Side
https://orfe.princeton.edu/~alaink/SmartDrivingCars/Papers/