The SmartDrivingCars eLetter, Pod-Casts, Zoom-Casts and Zoom-inars are made possible in part by support from the Smart Transportation and Technology ETF, symbol MOTO. For more information: www.motoetf.com. Most funding is supplied by Princeton University's Department of Operations Research & Financial Engineering and Princeton Autonomous Vehicle Engineering (PAVE) research laboratory as part of its research dissemination initiative.
G. Bensinger, July 30, "One of the greatest tricks technology companies ever played was convincing their human guinea pig users that they were a privileged group called beta testers. From novel email software to alternative versions of Twitter to voice-enabled listening devices, such trials are cheap and easy to make available to thousands or millions of customers. It's a great way to see how a new version stacks up against the old. Other than some annoying glitches or unfamiliar icons, software beta testing is generally innocuous. The stakes for most apps are far below life and death. But there's nothing innocuous about the beta tests being run by Elon Musk, the billionaire C.E.O. of Tesla. He has turned American streets into a public laboratory for the company's supposed self-driving car technology. Tesla says that its inaccurately named full self-driving and autopilot modes are meant to assist drivers and make Teslas safer, but autopilot has been at the center of a series of erratic driving incidents. In public, Mr. Musk sometimes overhypes these technologies on social media and in other statements. Yet Tesla engineers have privately admitted to California regulators that they are not quite ready for prime time... If Tesla wants to run beta tests with human guinea pigs, it should do so on a closed track... Unfortunately, they've, presumably, done that and checked that box! ..."
Read more Hmmmm... Beta testing using a responsible privileged group is actually very good; however, the Beta Testers must be responsible and not "loose cannons". The objective of Beta testing is to uncover problems. Challenges are to be expected. Beta testers must be instructed/messaged carefully by all in the organization. The false aura that Elon creates/created to sell his product is completely counterproductive to beta testing the product to uncover its weaknesses. Consequently, ALL problems and shortcomings with AutoPilot/FSD come from the head. Elon needs to be held accountable. Nikola's ex-chairman is being charged with lying to investigators; Elon may need to be charged with lying to his customers and Beta testers. He is really good and really creative, but he needs a little humility. He needs to step forward and accept responsibility here; else, he needs to be indicted for lying to his customers. Poor Nikola Tesla... The two guys that leveraged his good name have been taking liberties that he can't be thrilled about. Alain
R. Stern, July 9, "No doubt, Rafaela Vasquez should have seen pedestrian Elaine Herzberg sooner on March 18, 2018, and taken action before the autonomous Uber vehicle she was riding in hit and killed her. Widely seen interior video from a camera inside the Volvo SUV shows that Vasquez was not looking at the road in the seconds before the impact. But there's far more to the story than that, and Vasquez's defense team says the grand jury didn't get to hear information critical to the case before deciding to indict her last September on a charge of negligent homicide. Yavapai County Attorney Sheila Polk decided that Uber was not criminally liable in the crash in March 2019. Her private lawyers, Albert Morrison and Marci Kratter, filed an extensive motion in Maricopa County Superior Court on Tuesday demanding that the case be remanded back to the grand jury for a new determination of probable cause...." Read more Hmmmm... In short, my ethics say... Yes! See also Vasquez Remand Motion, July 9.
The algorithm "saw" Elaine 6 seconds before it hit her. The algorithm wasn't written to err on the side of caution... slowing down to take more time to resolve its confusion. The algorithm was written in such a way that it simply continued on "full steam ahead". The algorithm had disabled the Automated Emergency Braking (AEB) system. The AEB was supposed to be explicitly deactivated only at speeds under 40 mph, yet the algorithm had the car traveling at 41 mph. Finally, the AEB itself may have been miscoded to explicitly disregard objects in the lane ahead for which the component of their speed in the direction of the lane centerline is sensed to be zero. Please don't write code that does that! Much of this miscoding by those that devise, chart and write these algorithms is out of a tendency to prefer comfort over safety/caution.
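The dangerous filtering pattern criticized above can be sketched in a few lines. This is a hypothetical illustration, not Uber's actual code; the function names and the confidence thresholds are assumptions made for the example:

```python
# Hypothetical sketch of the filtering logic criticized above.
# All names and thresholds are illustrative, not from any real codebase.

def ignores_object(speed_along_lane_mph: float) -> bool:
    """The criticized heuristic: discard any object whose speed
    component along the lane centerline is sensed to be zero,
    on the assumption it is roadside clutter (signs, parked cars)."""
    return speed_along_lane_mph == 0.0

def cautious_response(in_lane_confidence: float) -> str:
    """What the commentary argues for instead: when classification
    is uncertain, slow down to buy time rather than continue at
    full speed (comfort traded for caution)."""
    if in_lane_confidence >= 0.9:
        return "brake"        # clearly in the lane ahead
    if in_lane_confidence >= 0.2:
        return "decelerate"   # unsure: err on the side of caution
    return "continue"         # confidently clear

# A stationary pedestrian in the lane has zero speed along the
# centerline, so the criticized filter drops exactly the case
# that matters most:
ignores_object(0.0)   # -> True
```

The point of the sketch: the one-line filter is cheap and comfortable, but it conflates "stationary beside the lane" with "stationary in the lane", which is the distinction the next paragraphs dwell on.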
The act of driving down a road naturally involves the encounter with numerous objects for which "their speed in the direction of the lane centerline" is in fact zero. These are all of the stationary objects one encounters when traveling: buildings along the side of the road, parked cars, telephone poles, picket fences, pedestrians waiting patiently for the light to change, etc. Unfortunately, the sensors that sense these objects, including LiDAR, are not perfect (nothing is), and will, while rarely, misplace these objects as being in the lane ahead. Moreover, there are stationary objects that are indeed correctly sensed to be in the lane ahead, but these can readily be passed under... overhead signs, tree canopies and overpasses. Consequently, none of these stationary objects pose any danger.
They can readily be passed under if they are really in the lane ahead, and can be readily bypassed if they are mis-located common stationary objects that line the road ahead... Unless it really is an object whose "speed in the direction of the lane centerline" is zero and it is really located in the lane ahead, as it was with Elaine Herzberg... and with the rash of Tesla crashes with trucks sprawled across the lane ahead, firetrucks and police cruisers parked in the lane ahead, NJ barriers located in the center of an inappropriately striped exit lane, and trees in the lawn ahead. Luckily, stationary objects in travel lanes are extremely rare, but, unfortunately, sensors and algorithms much more often mis-position objects in the lane ahead that are actually beside the lane, not in the lane. To avoid the "discomfort" of slowing down to be sure, these algorithms have been written to disregard, rather than be careful.
In my view, it is those that have written and implemented these algorithms that are the true folks that are "responsible" for this tragic crash. They didn't have to write the algorithms that way. They could have written them to be better and more rarely mis-position stationary objects. Moreover, they knew they had a problem here, because the code over-simplistically and irresponsibly dismisses its shortcoming. It is the way this code was written that caused this crash. The code required Rafaela to save it from this disaster. I doubt that Rafaela was informed about this fundamental shortcoming in the code. Consequently, my ethics side with the view that she is wrongfully charged. Whether or not the algorithm designers and coders need to be charged is another question. They certainly should be aware that they are complicit here. So should the Society of Automotive Engineers, which preaches "cause no harm" and thus suggests that one never brake when one shouldn't be braking. The person who is tailgating you may rear-end you. In a perfect world, then maybe. But all of us, except for maybe SAEers, get confused, misidentify, mislocate, and hopefully we all do hit the brakes at least a little to give us some time to get things straight. This philosophy should also apply to these automated gizmos. Alain
Many, July 30, "At the 2021 Knight Smart Cities Lab, we'll help community leaders and technologists explore how to leverage federal funding, data and digital technology to help make strong decisions and improve quality of life for residents in 2021 and beyond...." Read more Hmmmm... Sorry I didn't link earlier. I hope that the Knight Foundation posts the recordings of the sessions, especially PANEL 4: EQUITY & MOBILITY — AV ROUND-TABLE. It was very good, although the tendency continues to be one of "educating communities"... selling them what we think is good for them, rather than being educated by communities that, for whatever reason, can't or wish not to drive a car: "What improvement(s) would you like to come about in the way you currently travel to the places you go to frequently?" "What improvement(s) would you like to come about in the way you currently travel to the places you go to infrequently that would cause you to travel to those places more often?" and, "What improvement(s) would you like to come about in new ways to travel that would cause you to travel to places you'd like to go to but have chosen to not go there?" If we are really trying to deploy a system that improves the quality of life of those that have been mobility marginalized by the personal automobile, then that system needs to be designed and deployed to address the questions above. Alain
J. Albert, July 31, "The driver of a Tesla involved in a fatal crash that California highway authorities said may have been operating on Autopilot posted social media videos of himself riding in the vehicle without his hands on the wheel or foot on the pedal. The May 5 crash in Fontana, a city 50 miles (80 kilometers) east of Los Angeles, is also under investigation by the National Highway Traffic Safety Administration. The probe is the 29th case involving a Tesla that the federal agency has investigated. In the Fontana crash, a 35-year-old man identified as Steven Michael Hendrickson was killed when his Tesla Model 3 struck an overturned semi on a freeway about 2:30 a.m...." Read more Hmmmm... Here goes the broken record again... why is AutoPilot being blamed when it is the job of the Automated Emergency Braking (AEB) system to keep the car from crashing into stationary objects ahead??? I think that most, if not all, of those 29 crashes involve crashes with stationary objects in the lane ahead, and most, if not all, show no engagement of the AEB system to prevent or mitigate the crash. Why doesn't Tesla's AEB work?????????? Alain
R. Mitchell, July 30, "Apple Chief Executive Tim Cook and Tesla Chief Executive Elon Musk are talking on the phone. The 2016 unveiling of the make-it-or-break-it Model 3 is coming soon, but Tesla is in serious financial trouble. Cook has an idea: Apple buys Tesla. Musk is interested, but one condition: "I'm CEO". Sure, says Cook. When Apple bought Beats in 2014, it kept on the founders, Jimmy Iovine and Dr. Dre. No, Musk says. Apple. Apple CEO. "F... you," Cook says, and hangs up... So goes the juiciest allegation in "Power Play: Tesla, Elon Musk and the Bet of the Century" by Wall Street Journal reporter Tim Higgins. The secondhand anecdote is atypical in a way (Higgins doesn't break much news or gossip) but it also nicely encapsulates this sweeping history of the electric-car juggernaut, a company that often seems to innovate and thrive in spite of its founder rather than as a result of his vaunted genius..." Read more Hmmmm... Very interesting!!!! See SDC PodCast/ZoomCast with Tim Higgins. Alain
F. Lambert, July 21, "A new study dispels the persistent myth that electric cars pollute just as much as gas-powered cars because they charge on a "dirty" electric grid, and mining for battery materials is polluting. While electric cars have no tailpipe emissions, unlike vehicles equipped with an internal combustion engine, they still pollute through the energy needed to produce them, like any other product, and with the electricity used to charge them if it's not renewable. However, it has been commonly understood that electric vehicles are still more efficient than their gas-powered counterparts throughout their entire life cycle despite those sources of emissions...." Read more Hmmmm... Unfortunately, I don't agree. Since the process involves the replacement of an ICE with an EV, the EV must be burdened with the emissions associated with the additional electricity that it will consume, and not the average electricity that it will consume. In order to address the climate issues, as new clean energy sources come on-line, the most polluting means of generating electricity are being turned off in order to most sustainably serve current electricity users. Thus the electricity used by each new user is responsible for turning back on the source that was most recently turned off... thus, that EV that you bought instead of an ICE burns the least sustainable energy... today that is coal. Tomorrow, whenever that is, that least sustainable electricity may well be solar, but that day only comes when all other users of electricity use solar or something even less polluting.
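The marginal-versus-average accounting argument above can be made concrete with a toy calculation. The emission factors and the annual charging figure below are illustrative placeholders, not measured grid data:

```python
# Toy comparison of average vs. marginal emissions accounting for one EV.
# All kg CO2 per kWh figures are illustrative placeholders.
AVERAGE_GRID = 0.4     # fleet average across all generators on the grid
MARGINAL_COAL = 1.0    # the dirtiest source "turned back on" by new load

ANNUAL_EV_KWH = 3000   # hypothetical annual charging demand of one EV

average_burden = ANNUAL_EV_KWH * AVERAGE_GRID    # 1200.0 kg CO2
marginal_burden = ANNUAL_EV_KWH * MARGINAL_COAL  # 3000.0 kg CO2

# The commentary's point: the new EV load should be charged with the
# marginal (last-on) source, which today may be coal and can carry a
# much larger burden than the grid average suggests.
```

Under these made-up numbers, average accounting understates the new load's burden by a factor of 2.5, which is the gap the commentary is objecting to.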
Just trying to be fair here. Alain
H. Spitzer, July 28, "With autonomous vehicles, one of the fundamental elements of creating a safe and capable self-driving system (SDS) is being able to accurately perceive all objects in the car's environment. Recently, a Tesla owner cruising down a California highway received a surprising notification on their car's dashboard display. The vehicle's Autopilot driver-assist system spotted the moon high in the sky, perfectly circular and tinted yellow in a haze of wildfire smoke, and made an incorrect assumption: yellow traffic signal ahead! To identify and navigate traffic signals, Tesla's Autopilot relies solely on visual interpretation in the moment, an approach spotlighted in its misperception of the moon. In contrast to driver-assist, the approach taken by self-driving system developers like Argo AI, Waymo, and Aurora uses a combination of data from multiple sensors and high-definition 3D maps to avoid mistaking streetlights for moonlight...." Read more Hmmmm... Largely click-bait... I'm sure Prof. Ramanan did not say that Lidar never mis-identifies objects, and didn't suggest that Tesla doesn't re-evaluate what it sees many times during the course of each and every second. He also did not suggest that he's never mis-identified something. The challenge here is that when you have n different things focused on answering a question, which answer do you choose if there isn't unanimous agreement? One doesn't know if the answer is correct even if all sensors return the same answer. When they disagree, which is chosen? There is an algorithm someplace that makes that choice. I suspect that Tesla looked at this part of their code and realized that this part of the algorithm usually/always favored image processing over radar signal processing if there was disagreement. If that is/was the case, then radar is useless when trying to answer this question. This situation must have existed for many questions, which would lead anyone to realize radar is expendable. A long time ago, Elon saw Lidar as being expendable for probably the same reason.
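The arbitration problem described above (n sensor pipelines answering the same question, with some algorithm picking a winner when they disagree) can be sketched as follows. This is a hypothetical fusion rule written for illustration, not Tesla's code:

```python
def arbitrate(camera_answer: str, radar_answer: str,
              prefer: str = "camera") -> str:
    """Pick one answer when two sensor pipelines disagree.
    If the rule always favors one modality on disagreement, the
    other modality contributes nothing to this question."""
    if camera_answer == radar_answer:
        return camera_answer   # agreement (still no proof of correctness)
    return camera_answer if prefer == "camera" else radar_answer

# With a camera-always-wins rule, radar never changes the outcome,
# which is one way a team might conclude that the sensor is
# expendable for this question:
arbitrate("yellow_signal_ahead", "no_object_ahead")  # -> "yellow_signal_ahead"
```

Note the asymmetry: the tie-break rule, not the sensor hardware, is what makes one modality useless, which is the commentary's argument for why radar (and earlier, Lidar) could be judged expendable.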
H. Poser '77, Sept 13, 2020, "Creating Value for Light Density Urban Rail Lines". See slides, see video. Hmmmm... Simply Brilliant. Alain
July 12 -> 15, "..." Read more Hmmmm... I haven't been able to find a public source for any of the content from the symposium, but there were at least three sessions (of the few that I was able to attend) that were really good. One was B-101 - An inside Look at Policy-Making for Automated Vehicles, moderated by Baruch Feigenbaum of the Reason Foundation. Pay particular attention to the insights offered by Kevin Biesty of Arizona DoT. So far, no one in the world has done it better. A second one was