Tuesday, October 25, 2011
War's Remote-control Future
Unmanned drone attacks and shape-shifting robots
by Anna Mulrine
Global Research, October 23, 2011
Christian Science Monitor
The Pentagon already includes unmanned drone attacks in its arsenal. Next up: housefly-sized surveillance craft, shape-changing 'chemical robots,' and tracking agents sprayed from the sky. What does it mean to have soldiers so far removed from the battlefield?
Pakistanis hold up a burning mock drone aircraft during a May rally against drone attacks in Peshawar. In 2009, the Brookings Institution estimated that unmanned drone attacks were killing about 10 civilians for every 1 insurgent in Pakistan. (K. Pervez/Reuters)
In the shadow of a heavily fortified enemy building, US commanders call in a chemical robot, or what looks like a blob. They give it a simple instruction: Penetrate a crack in the building and find out what's inside. Like an ice sculpture or the liquid metal assassin in "Terminator 2," the device changes shape, slips through the opening, then reassumes its original form to look around. It uses sensors woven into its fabric to sample the area for biological agents. If needed, it can seep into the cracks of a bomb to defuse it.
Soldiers hoping to eavesdrop on an enemy release a series of tiny, unmanned aircraft the size and shape of houseflies to hover in a room unnoticed, relaying invaluable video footage.
A fleet of drones roams a mountain pass, spraying a fine mist along a known terrorist transit route – the US military's version of "CSI: Al Qaeda." Days later, when troops capture suspects hundreds of miles away, they test them for traces of the "taggant" to discover whether they have traversed the trail and may, in fact, be prosecuted as insurgents.
Welcome to the battlefield of the future. Malleable robots. Insect-size air forces. Chemical tracers spritzed from the sky. It's the stuff of science fiction.
But these are among the myriad futuristic warfighting creations currently being developed at universities across the country with funds from the US military. And the future, in many cases, may not be too far off.
Engineering students at the Naval Postgraduate School in Monterey, Calif., for instance, are now experimenting with chemical taggants on unmanned aerial vehicles (UAVs) like the ones being used in Afghanistan. Sure, the shape-changing chemical robot that slips through cracks may be more Ray Bradbury than battlefield-ready. But the Pentagon, in its perpetual quest to find the next weapon or soldier-saving device – and with scientific assurances that it's possible – is already investing millions to develop it.
"We're not about 20 years, or 10 years, or even five years away – a lot of this could be out in the field in under two years," says Mitchell Zatkin, former director of programmable matter at the Defense Advanced Research Projects Agency, or DARPA, the Pentagon's premier research office.
The development of a new generation of military robots, including armed drones, may eventually mark one of the biggest revolutions in warfare in generations. Throughout history, from the crossbow to the cannon to the aircraft carrier, one weapon has supplanted another as nations have strived to create increasingly lethal means of allowing armies to project power from afar.
But many of the new emerging technologies promise not only firepower but also the ability to do something else: reduce the number of soldiers needed in war. While few are suggesting armies made up exclusively of automated machines (yet), the increased use of drones in Afghanistan and Pakistan has already reinforced the view among many policymakers and Pentagon planners that the United States can carry out effective military operations by relying largely on UAVs, targeted cruise missile strikes, and a relatively small number of special operations forces.
At the least, many enthusiasts see the new high-tech tools helping to save American lives. At the most, they see them changing the nature of war – how it's fought and how much it might cost – as well as helping America maintain its military preeminence.
Yet the prospect of a military less reliant on soldiers and more on "push button" technologies also raises profound ethical and moral questions. Will drones controlled by pilots thousands of miles away, as many of them are now, reduce war to an antiseptic video game? Will the US be more likely to wage war if doing so does not risk American lives? And what of the oversight role of Congress in a world of more remote-control weapons? Already, when lawmakers on Capitol Hill accused the Obama administration of circumventing their authority in waging war in Libya, White House lawyers argued in essence that an operation can't be considered war if there are no troops on the ground – and, as a result, does not require the permission of Congress.
"If the military continues to reduce the human cost of waging war," says Lt. Col. Edward Barrett, an ethicist at the US Naval Academy in Annapolis, Md., "there's a possibility that you're not going to try hard enough to avoid it."
Beneath a new moon, a crew pushes the 2,500-pound Predator drone toward a blacked-out flight line and prepares it for takeoff. The soldiers wheel over a pallet of Hellfire missiles and load them onto the plane's undercarriage. The Predator pilot walks around the aircraft, conducting his preflight check. He then returns to a nearby trailer, sits down at a console with joysticks and monitors, and guides the snub-nosed plane down the runway and into the night air – unmanned and fully armed.
The metronome-regular takeoffs of Predators here at Kandahar Air Field, in southern Afghanistan, have helped turn this strip of asphalt into what the Pentagon calls the single busiest runway in the world. An aircraft lifts off or lands every two minutes. It's a reminder of how integral drones have become to the war in Afghanistan and the broader war on terror.
Initially, of course, the plan was not to put weapons on Predator drones at all. Like the first military airplanes, they were to be used just for surveillance. As the war in Iraq progressed, however, US service members jury-rigged the drones with weapons. Today, armed Predators and their larger offspring, Reapers, fly over America's battlefields, equipped with both missiles and powerful cameras, becoming the most widely used and, arguably, most important tools in the US arsenal.
Since first being introduced in Iraq and Afghanistan, their numbers have grown from 167 in 2002 to more than 7,000 today. The US Air Force is now recruiting more UAV pilots than traditional ones.
"The demand has just absolutely skyrocketed," says the commander of the Air Force's 451st Operations Group, which runs Predator and Reaper operations in Kandahar.
As their numbers have grown, so has the sophistication with which the military uses them. The earliest drones operated more as independent assets – as aerial eyes that sent back intelligence and dropped their bombs. But today the unmanned aircraft are integrated into almost every operation on the ground, acting as advanced scouts and omniscient surveyors of battle zones. They monitor the precise movements of insurgents and kill enemy leaders. They conduct "virtual lineups," zooming in powerful cameras to help determine whether a suspected insurgent may have carried out a particular attack.
"A lot of the ground commanders won't execute a mission without us," says the Air Force's commander of the 62nd Expeditionary Reconnaissance Squadron in Afghanistan.
Robots, too, have become a far more pervasive presence on America's fields of battle. Remote-control machines that move about on wheels and tracks scour for roadside bombs in Iraq and Afghanistan. Soldiers in the mountains of eastern Afghanistan carry hand-held drones in backpacks, which they assemble and throw into the air to scope out terrain and check for enemy fighters. In the past 10 years, the Pentagon's use of robots has grown from zero to some 12,000 in war zones today.
Part of the exponential rise in the use of UAVs and robots stems from a confluence of events: improvements in technology and America's prolonged involvement in two simultaneous wars.
There is, too, the prospect of more money for military contractors eyeing a downturn in future defense budgets. Today, the amount of money being spent on research for military robotics surpasses the budget of the National Science Foundation, which, at $6.9 billion a year, funds nearly one-quarter of all federally supported scientific research at the nation's universities.
Military officials also see in the new technologies the possibility of savings in an era of shrinking budgets. Deploying forces overseas can now cost as much as $1 million a year per soldier.
Yet the biggest allure of the new high-tech armaments may be something as old as conflict itself: the desire to reduce the number of casualties on the battlefield and gain a strategic advantage over the enemy. As Lt. Gen. Richard Lynch, a commander in Iraq, observed at a conference on military robotics in Washington earlier this year: "When I look at the 153 soldiers who paid the ultimate sacrifice [under my command], I know that 80 percent of them were put in a situation where we could have placed an unmanned system in the same job."
Drones, in particular, seem the epitome of risk-free warfare for the nation using them – there are, after all, no pilots to shoot down. Moreover, the people who run them are often nowhere near the field of battle. Some 90 percent of the UAV operations over Afghanistan are flown by people in trailers in the deserts of Nevada. In Kandahar, soldiers help the planes take off and land and then hand over controls to the airmen in the US.
"We want to minimize the [human] footprint as much as possible," says the 451st Operations Group commander at the Kandahar airfield, where the effects of being close to the war are clearly visible: The plywood walls of the tactical operations center are lined with framed bits of jagged metal from mortars that have fallen on the airfield over the years.
While the distant control of drones may well protect American lives, it raises questions about what it means to have people so far removed from the field of conflict. "Sometimes you felt like God hurling thunderbolts from afar," says Lt. Col. Matt Martin, who was among the first generation of US soldiers to work with drones to wage war and who has written a book – "Predator: The Remote-Control Air War Over Iraq and Afghanistan: A Pilot's Story."
Martin agrees that the unmanned aircraft no doubt reduce American casualties, but wonders if they make killing "too easy, too tempting, too much like simulated combat, like the computer game Civilization."
It probably doesn't reassure critics that the flight controls for drones have, over the years, come to resemble video-game controllers – a change the military made to render them more intuitive for a generation of young soldiers raised on games like Gears of War and Killzone.
Martin knows what it's like to confront the dark side of war, even as he fought it from afar. During one operation, he was piloting a drone that was tracking an insurgent. Just after he fired one of the aircraft's missiles, two children rode their bicycles into range. They were both killed. "You get good at compartmentalizing," says Martin.
What worries critics is those who are too good at it – and the impact in general of waging war at a distance. Some fret about the mechanics of the decisionmaking process: Who ultimately makes the decision to pull the trigger? And how do you decide whom to put on the hit list – a top Al Qaeda official, yes, but is some petty but persistent insurgent a matter of national security?
As the US increasingly uses drones in its secret campaigns, questions arise about how much to inform America's allies about UAV attacks and whether the strikes alienate local populations more than they help subdue the enemy – a question the US confronts starkly, and almost weekly, with its drone campaign in Pakistan.
From the US military's viewpoint, the drone war has been fantastically successful, helping to kill key Al Qaeda operatives and Taliban insurgents with a minimum of civilian casualties and almost no US troops put at risk.
Some even believe that the ethical oversight of drones is far more rigorous than that of manned aircraft, since at least 150 people – ground crews, engineers, pilots, intelligence analysts – are typically involved in each UAV mission.
The issue of what's a minimum of civilian losses is, of course, subjective. In 2009, the Brookings Institution, a Washington think tank, estimated that the US drone war was killing about 10 civilians for every 1 insurgent in Pakistan. That may be far fewer casualties than would be killed with traditional airstrikes. But it is hardly comforting to the Pakistanis.
Moreover, the very practice of taking out enemy leaders or sympathizers could at some point, according to detractors, devolve into an aerial assassination campaign. When the US used a drone strike last month to kill the jihadist cleric and American-born Anwar al-Awlaki in Yemen, President Obama hailed it as a "major blow" to Al Qaeda in the Arabian Peninsula. But some critics decried the killing of a US citizen with no public scrutiny.
Barrett, who is the director of research at the Naval Academy's Stockdale Center for Ethical Leadership, discusses with his students the prospect of whether UAVs make it easier to wage war if the government doesn't have to worry about a public outcry. "There are not the mass numbers of troops moving around and visible, so it could be easier to circumvent the oversight of Congress and, therefore, legitimate authority," he notes.
Others ask a more simple but practical question: What about the troops who conduct the UAV strikes from the Nevada desert – could they become legitimate targets of America's enemies at, say, a local mall, bringing the war on terror to the suburbs?
Some worry that the US is, in fact, placing too heavy a burden on its UAV troops. Despite warnings that "video-game warfare" might make them callous to killing, new studies suggest that the stress levels drone operators face are higher than those for infantry forces on the ground.
"Having this idea of a 'surgical war' where you can really just pinpoint the bad guys with the least amount of damage to our own force, there's a bit of naiveté in all that," says Maryann Cusimano Love, an associate professor at the Catholic University of America in Washington, D.C.
She says the powerful cameras on the drones allow pilots to see in "great vivid detail the real-time results of their actions. That is an incredible stress on them."
It is also, she argues, a "ghettoization of the killing function in war." However justified the military mission may be, she says, "You are still giving the most stressful job of war disproportionately to this one subset of people."
Nearly as long as militaries have existed, they have invented arms to keep their soldiers as far away from danger as possible. Some sound ridiculous, others terrifying, but most have raised questions of fairness in warfare.
During World War II, Japanese forces used the jet stream to launch paper "fire balloons" rigged with bombs meant to explode when they drifted over US soil. One such balloon discovered by an American family during a picnic in the Oregon woods resulted in the only deaths in the continental US caused by enemy hostilities in the war.
For their part, US scientists experimented with a form of bio-inspired warfare: a "bat bomb" that they planned to launch in parachute-rigged casings over Japan. They imagined fitting the bodies of tiny bats with incendiary bombs on timers. The theory was that the bats, once dropped, would roost in the eaves and attics of Japan's delicate wooden dwellings, setting off fires. The technology was successfully tested but scrapped when it was deemed too expensive by the Pentagon.
On the Western front, Germany was experimenting with a remote-control tank known as the Goliath. It used technology pioneered by an American who had demonstrated a remote-control boat years earlier at Madison Square Garden in New York City. When he tried to sell his technology to the US military, however, he was met with ridicule.
"He said, 'I've got this technology,' but they started laughing – they thought he was crazy," says Peter Singer, author of "Wired for War: The Robotics Revolution and Conflict in the 21st Century."
With the advent of the US wars in Iraq and Afghanistan, however, technology has once again rendezvoused with military necessity. A company called iRobot in Bedford, Mass., sent a prototype of its PackBot, which soldiers began using to clear caves and bunkers suspected of being mined. When the testing period was over, "The Army unit didn't want to give the test robot back," Mr. Singer notes.
While the use of robots that can detect and defuse explosives is growing exponentially, the next big frontier for America's military R2-D2s may parallel what happened to drones: They may be fitted with weapons – offering new fighting capabilities as well as raising new concerns.
Already, researchers are experimenting with attaching machine guns to robots that can be triggered remotely. Field tests in Iraq for one of the first weaponized robots, dubbed SWORDS, didn't go well.
"There were several instances of noncommanded firing of the system during testing," says Jeffrey Jaczkowski, deputy manager of the US Army's Robotic Systems Joint Project Office.
Though US military officials tend to emphasize that troops must remain "in the loop" as robots or drones are weaponized, there remains a strong push for automation coming from the Pentagon. In 2007, the US Army sent out a request for proposals calling for robots with "fully autonomous engagement without human intervention." In other words, the ability to shoot on their own.
"Let's put it this way," says Lt. Col. David Thompson, project manager of the Army's robotic office. "We've seen the success of unmanned air vehicles that have been armed. This [weaponizing robots] is a natural extension."
At the Georgia Institute of Technology in Atlanta, Ronald Arkin is researching a stunning premise: whether robots can be created that treat humans on the battlefield better than human soldiers treat each other. He has pored over the first study of US soldiers returning from the Iraq war, a 2006 US Surgeon General's report that asked troops to evaluate their own ethical behavior and that of their comrades.
He was struck by "the incredibly high level of atrocities that are witnessed, committed, or abetted by soldiers." Modern warfare has not lessened the impact on soldiers. It is as stressful as ancient hand-to-hand combat with axes, he argues, because of the sorts of quick decisions that fighting with modern technology requires.
"Human beings have never been designed to operate under the combat conditions of today," he says. "There are many, many problems with the speed with which we are killing right now – and that exacerbates the potential for violation of laws of war."
With Pentagon funding, Dr. Arkin is looking at whether it is possible to build robots that behave more ethically than humans – to not be tempted to shoot someone, for instance, out of fear or revenge.
The key, he says, is that the robot should "first do no harm, rather than 'shoot first, ask questions later.' "
Such technology requires what Arkin calls an "ethical adaptor," which involves following orders. Learning, he explains, is potentially dangerous when it comes to making decisions about whether to kill. "You don't want to hand soldiers a gun and say, 'Figure out what's right and wrong.' You tell them what's right and wrong," he says. "We want to do the same for these robotic systems."
The aim, says Arkin, is not to be perfect, "but if we can achieve this goal of outperforming humans, we have saved lives – and that is the ultimate benchmark of this work."
Other research into armed robots centers not so much on outperforming humans as being able to work with them. In the not-too-distant future, military officials envision soldiers and robots teaming up in the field, with the troops able to communicate with machines the way they would with a human squad team member. Eventually, says Thompson, the robot-soldier relationship could become even more collaborative, with one human soldier leading many armed robots.
After that, the scenarios start to become something more out of the realm of film studios. For instance, retired Navy Capt. Robert Moses, president of iRobot's government and industrial relations division, can envision the day of humanless battlefields.
"I think the first thing to do is to go ahead and have the Army get comfortable with the robot," he says. One day, though, "you could write a scenario where you have an unmanned battle space – a 'Star Wars' approach."
These developments raise questions that ethicists are just beginning to unravel. This includes Peter Asaro, who last year formed the International Committee for Robot Arms Control. He's grappling with conundrums like: What, to a machine, counts as "about to shoot me?" How does a robot make a distinction between a dog, a man, and a child? How does it tell an enemy from a friend?
Such things are not entirely abstract. An automated "sentry robot" now stands guard in the demilitarized zone between North and South Korea, equipped with heat, voice, and motion sensors, as well as a 5 mm machine gun. What if it starts firing, accidentally or otherwise?
Within their own ranks, military officials are asking themselves similar questions. In March, the Navy launched a program at its postgraduate school in Monterey that explores the legal, social, and cultural impacts of unmanned systems. "Are we going to give the ability to a robot for conducting a killing operation based on its own software and sensors?" asks retired Navy Capt. Jeffrey Kline, who is directing the new effort. "That rightly causes a lot of red flags."
In part, military officials feel they have to develop these new systems to stay ahead of America's enemies, many of whom will be creating their own versions of automated armies. Yet that could lead to what some consider a 21st-century arms race and encourage others to use the new weapons.
Late last month, federal authorities charged a Massachusetts man with plotting an attack on the US Capitol and the Pentagon using a large, remote-controlled aircraft filled with explosives. Earlier this year, Libyan rebels contacted Aeryon Labs Inc., a Canadian drone manufacturer, about buying a small unmanned helicopter. "Ultimately, I think they found us through Googling. That's how a lot of people find us," says Dave Kroetsch, Aeryon's president. Aeryon officials say they get inquiries from militaries all over the world, which is one reason they have decided not to sell weaponized drones.
In the end, the emerging era of remote-control warfare – like evolutions in warfare throughout history – will likely create profound new capabilities as well as profound new problems for the US. The key will be to maximize the former while minimizing the latter.
"There are many futures that can be created," says Georgia Tech roboticist Arkin. "Hopefully, we can create, I won't say a utopian, but at least not a dystopian one."
Global Research Articles by Anna Mulrine
Monday, October 24, 2011
I don’t know how others manage their lives. From reading biographies of famous people, it seems they mostly changed their lives as the result of some dramatic incident, like Steve Jobs getting fired by the company that he founded, or dealing with the death of a loved one, etc. However, I rarely read about what made famous people make incremental changes in their lives. The change that I’m talking about is not like switching from brand A toothpaste to brand B. What I mean here is something like a change in point of view that leads to a change of actions, more or less in the self-improvement area. Of course, most biographies only pick the significant events to write about; it is really hard for a writer, as a third party, to capture the kind of subtle, smaller changes that went on inside the head of another person. Pure speculation would be more in the realm of psychology than biography. On the other hand, in autobiographies, people usually write about what they do and how they do it, but less about why they do it, and it is particularly rare to read about the causes behind the ‘why’. Thus, I don’t really know if my thinking process is any different from others’. I’ve not openly talked about this with anyone, cuz it is rather personal and I doubt people would be interested in how dots are connected in my brain. As such, this is the only place I’m gonna talk about it; it starts here and ends here.
As the name of my blog says – and it is something that I believe in – Only Change is Forever. The upcoming change is largely internally driven, within my brain. Graphically speaking, it is like there is a fuse and there are a few chemicals. Somehow, my brain mixed the chemicals into gunpowder, and I’m gonna light them up at the fuse. As this is a ‘controlled’ demolition, it should blow up the ‘right’ way. So, what is that fuse? The answer is the feelings of sadness, being unmotivated, exhausted, suppressed – basically a bowl of negative energy that has overshadowed my life in recent months. Deep down in my mind, I know that if I don’t do something about it, things will crack here and there and shit’s gonna happen.
Now, what are the chemicals? They are not related by any means except in my head. They are just a few things that happened recently, which I happened to ‘receive’ here and there and connect somehow.
The first ‘chemical’ is a news story that I read about Steve Jobs’ comments on Android. I’m not gonna drill down into the details, as I will probably blog about that separately in the coming days. The thing that I wanna highlight here is that though he certainly held a grudge against Android, he reportedly still gave advice to Larry Page, one of the founders of Google, upon his request, about improving the running of Google. Steve told Larry to ask himself what the important things about Google are, and to stop doing the things that are not important; otherwise, Google would become another Microsoft. Then, Larry stopped a few things that Google was working on and began to focus on the five or six things that he wants Google to really do. What I got from this story is that even for a big company, as resourceful and super-smart as it may be, there is only so much it can do. Along its path of development, it still needs to decide what is important and focus on that in order to be successful down the road. Well, I’m just a person, not a company, but I think it applies to me too.
The second chemical is a wedding that I attended recently, which gave me a chance to catch up with some old colleagues. Nothing dramatic happened there, just some informal but candid conversation about the latest status of some folks that I know. Some of them are obviously pretty well off, at least doing quite well superficially. Others are fine, but I would say that I ‘may’ be better off than them in certain aspects of life. I didn’t really feel particularly proud of or bad for myself. It just reminded me that there are options. We can remain unchanged to a certain extent while the world is changing around us. However, that may lead us to change involuntarily later, since the external factors are much more powerful than us. On the other hand, we can make relatively controllable, voluntary changes in spite of being in a steady environment. Namely, we change before being forced to change. There is no right or wrong for either one; either way, we will just have to bear the consequences. The bottom line is that I need to realize that there is always more than one way while I’m on my way.
The third chemical is a ‘semi-insomnia’ night that I had recently. I usually sleep well; insomnia doesn’t happen to me more than a few nights a year. That night, I was doing self-reflection again. Somehow, I remembered a story that maybe I read a long time ago, and maybe ‘partially’ made up myself. It is about a Japanese Imperial Army soldier who had been left behind in a jungle in the Philippines for many years after the end of WWII. He didn’t know his country had surrendered, so he had remained faithful to his duty all those years. The perhaps ‘made-up’ part is that he found out about the end of the war through an encounter with an old lady who was washing her clothes on the bank of a river. He tried to threaten her somehow, but the old lady didn’t feel threatened, and just told him to give up and stop doing what he had been doing. Somehow in my mind, I empathized with that soldier. Just like being struck by lightning, I can imagine or visualize that I were him. While looking at the sunset, with my dry lips and fatigued body, I disarm myself and drop down on my knees. I feel exhausted and abandoned. Then, I just tell myself, ‘that’s it, it’s time to go.’
The result of mixing those chemicals in my mind is that I’m gonna reshuffle my priorities and then reallocate my limited resources, i.e. time, money, and energy, accordingly. Cuz, in the past few months (maybe longer), I had allocated my attention and time to things that I found interesting but not important. As a result, I began to see cracks here and there. Though they may not be very noticeable to others (or they know but are just not telling me), I think it’s time to do something about them.
Of course, I still need to safeguard my privacy about what exactly I’m gonna do, so I’m not gonna list things out in detail here. What I would say is that I just need to look at myself in the mirror and ask who I am to the people around me. The next step is to see what I can do better in those roles. That’s what I’m gonna do.
Hopefully, the ‘bomb’ will explode as planned, not in my face! i.e. things will work out my way.
Friday, October 14, 2011
As for the other smaller polishes of functions here and there, I'm not gonna talk about them, as they are quite nicely presented on Apple.com or other great sites like ilounge.com. What I wanna talk about is a function that I haven't had the chance to play with, but which is certainly gaining buzz on the net as we speak – Siri, a personal-secretary function with AI, from a company acquired by Apple last year.
As presented in Apple's latest keynote, Siri can answer plain-English questions from its iPhone owner in plain English. It should be a very useful tool for visually impaired people and for people always on the go. So far, I don't think any dark side of this app has been reported yet. On the contrary, I've started to find many funny responses from pioneer users of Siri who got their hands on the iPhone 4S in advance. Check this, this, and this site out. Simply based on my imagination, I would say that Siri will be viewed by some 'loners' as a virtual companion to talk to. Cuz, what loners need are attention and someone to talk to. Siri can say 'No' to the questioner, but it has to answer. Thus, I can imagine a loner talking to Siri all the time; in spite of rejections from Siri, the loner would still love it – at least his/her questions come with responses. Besides loners, most regular users can't help but get a laugh out of Siri's responses.
On the other hand, come to think of it, Siri kinda reminds me of 'HAL', the computer with AI in the movie '2001: A Space Odyssey'. Damn! Steve Jobs was such a genius, and wise enough to incorporate some kind of AI into the iPhone. I guess he must be smiling in Heaven if he knows how users will respond to Siri and get creative with this app down the road.
I'm sure this app will be viewed as a killer app for the iPhone very soon and will be continuously enhanced. The potential of Siri is limitless. Just imagine: a robber with GPS turned on on his iPhone, while being chased by cops, asks his Siri, 'Where should I go so that the cops can't find me?' I doubt the current version of Siri can really help the robber. However, imagine a future Siri giving the robber a map and telling him where to go. Then, the cops ask the question, 'Where would Siri tell the guy I'm chasing to go while he's trying to lose us?' Would the cop's Siri tell him where the robber's Siri is directing him?? That would be quite an interesting scenario.
Also, would users be able to 'personalize' their own Siri? I think Apple may 'pretend' to let users do that, but actually Siri would remain its own master. I'm not saying Siri will become the 'Skynet' of the Terminator movies or the Machines of 'The Matrix', but I can't help visualizing that the relationship between users and Siri will someday switch from 'master and servant' to 'servant and master'. I say that because we are all getting lazy about doing things ourselves and becoming more dependent on machines/AI. Currently, we are going from looking up the weather in a weather app to simply asking Siri what the weather is. What would be next? Someday, we may be the ones being toyed with by the machines, and become their slaves. It is certainly a possibility.
Anyway, those are just some thoughts about this amazing app from Apple. Whether what I said will come true down the road, we will see.....
Thursday, October 13, 2011
As I'm not sure the post will stay accessible for long, I'm just gonna copy and paste it below for anyone interested. Actually, the readers' comments on the post in the link above are as interesting to read as the post itself. Check them out.
This post was intended to be shared privately and was accidentally made public. Thanks to +Steve Yegge for allowing us to keep it out there. It's the sort of writing people do when they think nobody is watching: honest, clear, and frank.
The world would be a better place if more people wrote this sort of internal memorandum, and even better if they were allowed to write it for the outside world.
Hopefully Steve will not experience any negative repercussions from Google about this. On the contrary, he deserves a promotion.
This post has received a lot of attention. For anyone here who arrived from The Greater Internet - I stand ready to remove this post if asked. As I mentioned before, I was given permission to keep it up.
Google's openness in allowing us to keep this message posted on its own social network is, in my opinion, a far greater asset than any SaaS platform. In the end, a company's greatest asset is its culture, and here, Google is one of the strongest companies on the planet.
Steve Yegge originally shared this post:
Stevey's Google Platforms Rant
I was at Amazon for about six and a half years, and now I've been at Google for that long. One thing that struck me immediately about the two companies -- an impression that has been reinforced almost daily -- is that Amazon does everything wrong, and Google does everything right. Sure, it's a sweeping generalization, but a surprisingly accurate one. It's pretty crazy. There are probably a hundred or even two hundred different ways you can compare the two companies, and Google is superior in all but three of them, if I recall correctly. I actually did a spreadsheet at one point but Legal wouldn't let me show it to anyone, even though recruiting loved it.
I mean, just to give you a very brief taste: Amazon's recruiting process is fundamentally flawed by having teams hire for themselves, so their hiring bar is incredibly inconsistent across teams, despite various efforts they've made to level it out. And their operations are a mess; they don't really have SREs and they make engineers pretty much do everything, which leaves almost no time for coding - though again this varies by group, so it's luck of the draw. They don't give a single shit about charity or helping the needy or community contributions or anything like that. Never comes up there, except maybe to laugh about it. Their facilities are dirt-smeared cube farms without a dime spent on decor or common meeting areas. Their pay and benefits suck, although much less so lately due to local competition from Google and Facebook. But they don't have any of our perks or extras -- they just try to match the offer-letter numbers, and that's the end of it. Their code base is a disaster, with no engineering standards whatsoever except what individual teams choose to put in place.
To be fair, they do have a nice versioned-library system that we really ought to emulate, and a nice publish-subscribe system that we also have no equivalent for. But for the most part they just have a bunch of crappy tools that read and write state machine information into relational databases. We wouldn't take most of it even if it were free.
I think the pubsub system and their library-shelf system were two out of the grand total of three things Amazon does better than Google.
I guess you could make an argument that their bias for launching early and iterating like mad is also something they do well, but you can argue it either way. They prioritize launching early over everything else, including retention and engineering discipline and a bunch of other stuff that turns out to matter in the long run. So even though it's given them some competitive advantages in the marketplace, it's created enough other problems to make it something less than a slam-dunk.
But there's one thing they do really really well that pretty much makes up for ALL of their political, philosophical and technical screw-ups.
Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally -- wisely -- left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn't let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they're all still there, and Larry is not.
Micro-managing isn't that third thing that Amazon does better than us, by the way. I mean, yeah, they micro-manage really well, but I wouldn't list it as a strength or anything. I'm just trying to set the context here, to help you understand what happened. We're talking about a guy who in all seriousness has said on many public occasions that people should be paying him to work at Amazon. He hands out little yellow stickies with his name on them, reminding people "who runs the company" when they disagree with him. The guy is a regular... well, Steve Jobs, I guess. Except without the fashion or design sense. Bezos is super smart; don't get me wrong. He just makes ordinary control freaks look like stoned hippies.
So one day Jeff Bezos issued a mandate. He's doing that all the time, of course, and people scramble like ants being pounded with a rubber mallet whenever it happens. But on one occasion -- back around 2002 I think, plus or minus a year -- he issued a mandate that was so out there, so huge and eye-bulgingly ponderous, that it made all of his other mandates look like unsolicited peer bonuses.
His Big Mandate went something along these lines:
1) All teams will henceforth expose their data and functionality through service interfaces.
2) Teams must communicate with each other through these interfaces.
3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care.
5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
6) Anyone who doesn't do this will be fired.
7) Thank you; have a nice day!
Ha, ha! You 150-odd ex-Amazon folks here will of course realize immediately that #7 was a little joke I threw in, because Bezos most definitely does not give a shit about your day.
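[Editor's aside: to make the mandate concrete, here is a minimal sketch, purely my own illustration and not anything from Amazon, of what rules #1–#3 look like in practice: a team's data lives behind a single JSON-in/JSON-out service call, and other teams never touch the data store directly. The `InventoryService` name and its `get_stock` operation are hypothetical.]

```python
import json

class InventoryService:
    """Hypothetical team-owned service: the ONLY way in is call()."""
    def __init__(self):
        # Private data store: no direct reads by other teams allowed.
        self._stock = {"book-123": 42}

    def call(self, request_json: str) -> str:
        """The externalizable interface: JSON in, JSON out, network-friendly."""
        req = json.loads(request_json)
        if req.get("op") == "get_stock":
            sku = req.get("sku")
            return json.dumps({"ok": sku in self._stock,
                               "stock": self._stock.get(sku, 0)})
        return json.dumps({"ok": False, "error": "unknown op"})

# Another team talks to Inventory only via the interface, never via _stock.
svc = InventoryService()
resp = json.loads(svc.call(json.dumps({"op": "get_stock", "sku": "book-123"})))
print(resp["stock"])  # 42
```

Because the interface is a string-in/string-out call, swapping the in-process call for an HTTP hop later changes nothing for the caller, which is the whole point of rule #5.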
#6, however, was quite real, so people went to work. Bezos assigned a couple of Chief Bulldogs to oversee the effort and ensure forward progress, headed up by Uber-Chief Bear Bulldog Rick Dalzell. Rick is an ex-Army Ranger, West Point Academy graduate, ex-boxer, ex-Chief Torturer slash CIO at Wal*Mart, and is a big genial scary man who used the word "hardened interface" a lot. Rick was a walking, talking hardened interface himself, so needless to say, everyone made LOTS of forward progress and made sure Rick knew about it.
Over the next couple of years, Amazon transformed internally into a service-oriented architecture. They learned a tremendous amount while effecting this transformation. There was lots of existing documentation and lore about SOAs, but at Amazon's vast scale it was about as useful as telling Indiana Jones to look both ways before crossing the street. Amazon's dev staff made a lot of discoveries along the way. A teeny tiny sampling of these discoveries included:
- pager escalation gets way harder, because a ticket might bounce through 20 service calls before the real owner is identified. If each bounce goes through a team with a 15-minute response time, it can be hours before the right team finally finds out, unless you build a lot of scaffolding and metrics and reporting.
- every single one of your peer teams suddenly becomes a potential DOS attacker. Nobody can make any real forward progress until very serious quotas and throttling are put in place in every single service.
- monitoring and QA are the same thing. You'd never think so until you try doing a big SOA. But when your service says "oh yes, I'm fine", it may well be the case that the only thing still functioning in the server is the little component that knows how to say "I'm fine, roger roger, over and out" in a cheery droid voice. In order to tell whether the service is actually responding, you have to make individual calls. The problem continues recursively until your monitoring is doing comprehensive semantics checking of your entire range of services and data, at which point it's indistinguishable from automated QA. So they're a continuum.
- if you have hundreds of services, and your code MUST communicate with other groups' code via these services, then you won't be able to find any of them without a service-discovery mechanism. And you can't have that without a service registration mechanism, which itself is another service. So Amazon has a universal service registry where you can find out reflectively (programmatically) about every service, what its APIs are, and also whether it is currently up, and where.
- debugging problems with someone else's code gets a LOT harder, and is basically impossible unless there is a universal standard way to run every service in a debuggable sandbox.
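[Editor's aside: the quotas-and-throttling learning is the easiest one to show in code. Below is a minimal token-bucket sketch, assuming one bucket per calling team; this illustrates the general technique, not Amazon's actual mechanism.]

```python
import time

class TokenBucket:
    """Per-caller throttle: refills `rate` tokens/sec up to `burst`.
    Each request spends one token; an empty bucket means 'over quota'."""
    def __init__(self, rate, burst, now=time.monotonic):
        self.rate, self.burst, self.now = rate, burst, now
        self.tokens = burst          # start with a full burst allowance
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                 # reject instead of letting a peer DOS you

# Demo with a fake clock so the behavior is deterministic.
clock = [0.0]
bucket = TokenBucket(rate=1.0, burst=2.0, now=lambda: clock[0])
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
clock[0] += 1.0                                        # one second passes
print(bucket.allow())                                  # True
```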
That's just a very small sample. There are dozens, maybe hundreds of individual learnings like these that Amazon had to discover organically. There were a lot of wacky ones around externalizing services, but not as many as you might think. Organizing into services taught teams not to trust each other in most of the same ways they're not supposed to trust external developers.
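[Editor's aside: the service-registry learning also has a simple shape, even if the real thing doesn't. In this toy sketch, all names hypothetical, services register their endpoint and API surface, and clients discover them reflectively, including whether they are currently up.]

```python
class ServiceRegistry:
    """Toy registry: services self-register; clients discover them by name."""
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, api):
        # Record where the service lives and what calls it exposes.
        self._services[name] = {"endpoint": endpoint, "api": list(api), "up": True}

    def mark_down(self, name):
        self._services[name]["up"] = False

    def discover(self, name):
        # Return the service record only if it exists and is alive.
        info = self._services.get(name)
        return info if info and info["up"] else None

reg = ServiceRegistry()
reg.register("inventory", "inventory.internal:8080", ["get_stock"])
print(reg.discover("inventory")["endpoint"])   # inventory.internal:8080
reg.mark_down("inventory")
print(reg.discover("inventory"))               # None
```

Note that the registry is itself a service, which is exactly the bootstrapping point the rant makes: you can't have discovery without registration.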
This effort was still underway when I left to join Google in mid-2005, but it was pretty far advanced. From the time Bezos issued his edict through the time I left, Amazon had transformed culturally into a company that thinks about everything in a services-first fashion. It is now fundamental to how they approach all designs, including internal designs for stuff that might never see the light of day externally.
At this point they don't even do it out of fear of being fired. I mean, they're still afraid of that; it's pretty much part of daily life there, working for the Dread Pirate Bezos and all. But they do services because they've come to understand that it's the Right Thing. There are without question pros and cons to the SOA approach, and some of the cons are pretty long. But overall it's the right thing because SOA-driven design enables Platforms.
That's what Bezos was up to with his edict, of course. He didn't (and doesn't) care even a tiny bit about the well-being of the teams, nor about what technologies they use, nor in fact any detail whatsoever about how they go about their business unless they happen to be screwing up. But Bezos realized long before the vast majority of Amazonians that Amazon needs to be a platform.
You wouldn't really think that an online bookstore needs to be an extensible, programmable platform. Would you?
Well, the first big thing Bezos realized is that the infrastructure they'd built for selling and shipping books and sundry could be transformed into an excellent repurposable computing platform. So now they have the Amazon Elastic Compute Cloud, and the Amazon Elastic MapReduce, and the Amazon Relational Database Service, and a whole passel' o' other services browsable at aws.amazon.com. These services host the backends for some pretty successful companies, reddit being my personal favorite of the bunch.
The other big realization he had was that he can't always build the right thing. I think Larry Tesler might have struck some kind of chord in Bezos when he said his mom couldn't use the goddamn website. It's not even super clear whose mom he was talking about, and doesn't really matter, because nobody's mom can use the goddamn website. In fact I myself find the website disturbingly daunting, and I worked there for over half a decade. I've just learned to kinda defocus my eyes and concentrate on the million or so pixels near the center of the page above the fold.
I'm not really sure how Bezos came to this realization -- the insight that he can't build one product and have it be right for everyone. But it doesn't matter, because he gets it. There's actually a formal name for this phenomenon. It's called Accessibility, and it's the most important thing in the computing world.
The. Most. Important. Thing.
If you're sorta thinking, "huh? You mean like, blind and deaf people Accessibility?" then you're not alone, because I've come to understand that there are lots and LOTS of people just like you: people for whom this idea does not have the right Accessibility, so it hasn't been able to get through to you yet. It's not your fault for not understanding, any more than it would be your fault for being blind or deaf or motion-restricted or living with any other disability. When software -- or idea-ware for that matter -- fails to be accessible to anyone for any reason, it is the fault of the software or of the messaging of the idea. It is an Accessibility failure.
Like anything else big and important in life, Accessibility has an evil twin who, jilted by the unbalanced affection displayed by their parents in their youth, has grown into an equally powerful Arch-Nemesis (yes, there's more than one nemesis to accessibility) named Security. And boy howdy are the two ever at odds.
But I'll argue that Accessibility is actually more important than Security because dialing Accessibility to zero means you have no product at all, whereas dialing Security to zero can still get you a reasonably successful product such as the Playstation Network.
So yeah. In case you hadn't noticed, I could actually write a book on this topic. A fat one, filled with amusing anecdotes about ants and rubber mallets at companies I've worked at. But I will never get this little rant published, and you'll never get it read, unless I start to wrap up.
That one last thing that Google doesn't do well is Platforms. We don't understand platforms. We don't "get" platforms. Some of you do, but you are the minority. This has become painfully clear to me over the past six years. I was kind of hoping that competitive pressure from Microsoft and Amazon and more recently Facebook would make us wake up collectively and start doing universal services. Not in some sort of ad-hoc, half-assed way, but in more or less the same way Amazon did it: all at once, for real, no cheating, and treating it as our top priority from now on.
But no. No, it's like our tenth or eleventh priority. Or fifteenth, I don't know. It's pretty low. There are a few teams who treat the idea very seriously, but most teams either don't think about it at all, ever, or only a small percentage of them think about it in a very small way.
It's a big stretch even to get most teams to offer a stubby service to get programmatic access to their data and computations. Most of them think they're building products. And a stubby service is a pretty pathetic service. Go back and look at that partial list of learnings from Amazon, and tell me which ones Stubby gives you out of the box. As far as I'm concerned, it's none of them. Stubby's great, but it's like parts when you need a car.
A product is useless without a platform, or more precisely and accurately, a platform-less product will always be replaced by an equivalent platform-ized product.
Google+ is a prime example of our complete failure to understand platforms from the very highest levels of executive leadership (hi Larry, Sergey, Eric, Vic, howdy howdy) down to the very lowest leaf workers (hey yo). We all don't get it. The Golden Rule of platforms is that you Eat Your Own Dogfood. The Google+ platform is a pathetic afterthought. We had no API at all at launch, and last I checked, we had one measly API call. One of the team members marched in and told me about it when they launched, and I asked: "So is it the Stalker API?" She got all glum and said "Yeah." I mean, I was joking, but no... the only API call we offer is to get someone's stream. So I guess the joke was on me.
Microsoft has known about the Dogfood rule for at least twenty years. It's been part of their culture for a whole generation now. You don't eat People Food and give your developers Dog Food. Doing that is simply robbing your long-term platform value for short-term successes. Platforms are all about long-term thinking.
Google+ is a knee-jerk reaction, a study in short-term thinking, predicated on the incorrect notion that Facebook is successful because they built a great product. But that's not why they are successful. Facebook is successful because they built an entire constellation of products by allowing other people to do the work. So Facebook is different for everyone. Some people spend all their time on Mafia Wars. Some spend all their time on Farmville. There are hundreds or maybe thousands of different high-quality time sinks available, so there's something there for everyone.
Our Google+ team took a look at the aftermarket and said: "Gosh, it looks like we need some games. Let's go contract someone to, um, write some games for us." Do you begin to see how incredibly wrong that thinking is now? The problem is that we are trying to predict what people want and deliver it for them.
You can't do that. Not really. Not reliably. There have been precious few people in the world, over the entire history of computing, who have been able to do it reliably. Steve Jobs was one of them. We don't have a Steve Jobs here. I'm sorry, but we don't.
Larry Tesler may have convinced Bezos that he was no Steve Jobs, but Bezos realized that he didn't need to be a Steve Jobs in order to provide everyone with the right products: interfaces and workflows that they liked and felt at ease with. He just needed to enable third-party developers to do it, and it would happen automatically.
I apologize to those (many) of you for whom all this stuff I'm saying is incredibly obvious, because yeah. It's incredibly frigging obvious. Except we're not doing it. We don't get Platforms, and we don't get Accessibility. The two are basically the same thing, because platforms solve accessibility. A platform is accessibility.
So yeah, Microsoft gets it. And you know as well as I do how surprising that is, because they don't "get" much of anything, really. But they understand platforms as a purely accidental outgrowth of having started life in the business of providing platforms. So they have thirty-plus years of learning in this space. And if you go to msdn.com, and spend some time browsing, and you've never seen it before, prepare to be amazed. Because it's staggeringly huge. They have thousands, and thousands, and THOUSANDS of API calls. They have a HUGE platform. Too big in fact, because they can't design for squat, but at least they're doing it.
Amazon gets it. Amazon's AWS (aws.amazon.com) is incredible. Just go look at it. Click around. It's embarrassing. We don't have any of that stuff.
Apple gets it, obviously. They've made some fundamentally non-open choices, particularly around their mobile platform. But they understand accessibility and they understand the power of third-party development and they eat their dogfood. And you know what? They make pretty good dogfood. Their APIs are a hell of a lot cleaner than Microsoft's, and have been since time immemorial.
Facebook gets it. That's what really worries me. That's what got me off my lazy butt to write this thing. I hate blogging. I hate... plussing, or whatever it's called when you do a massive rant in Google+ even though it's a terrible venue for it but you do it anyway because in the end you really do want Google to be successful. And I do! I mean, Facebook wants me there, and it'd be pretty easy to just go. But Google is home, so I'm insisting that we have this little family intervention, uncomfortable as it might be.
After you've marveled at the platform offerings of Microsoft and Amazon, and Facebook I guess (I didn't look because I didn't want to get too depressed), head over to developers.google.com and browse a little. Pretty big difference, eh? It's like what your fifth-grade nephew might mock up if he were doing an assignment to demonstrate what a big powerful platform company might be building if all they had, resource-wise, was one fifth grader.
Please don't get me wrong here -- I know for a fact that the dev-rel team has had to FIGHT to get even this much available externally. They're kicking ass as far as I'm concerned, because they DO get platforms, and they are struggling heroically to try to create one in an environment that is at best platform-apathetic, and at worst often openly hostile to the idea.
I'm just frankly describing what developers.google.com looks like to an outsider. It looks childish. Where's the Maps APIs in there for Christ's sake? Some of the things in there are labs projects. And the APIs for everything I clicked were... they were paltry. They were obviously dog food. Not even good organic stuff. Compared to our internal APIs it's all snouts and horse hooves.
And also don't get me wrong about Google+. They're far from the only offenders. This is a cultural thing. What we have going on internally is basically a war, with the underdog minority Platformers fighting a more or less losing battle against the Mighty Funded Confident Producters.
Any teams that have successfully internalized the notion that they should be externally programmable platforms from the ground up are underdogs -- Maps and Docs come to mind, and I know GMail is making overtures in that direction. But it's hard for them to get funding for it because it's not part of our culture. Maestro's funding is a feeble thing compared to the gargantuan Microsoft Office programming platform: it's a fluffy rabbit versus a T-Rex. The Docs team knows they'll never be competitive with Office until they can match its scripting facilities, but they're not getting any resource love. I mean, I assume they're not, given that Apps Script only works in Spreadsheet right now, and it doesn't even have keyboard shortcuts as part of its API. That team looks pretty unloved to me.
Ironically enough, Wave was a great platform, may they rest in peace. But making something a platform is not going to make you an instant success. A platform needs a killer app. Facebook -- that is, the stock service they offer with walls and friends and such -- is the killer app for the Facebook Platform. And it is a very serious mistake to conclude that the Facebook App could have been anywhere near as successful without the Facebook Platform.
You know how people are always saying Google is arrogant? I'm a Googler, so I get as irritated as you do when people say that. We're not arrogant, by and large. We're, like, 99% Arrogance-Free. I did start this post -- if you'll reach back into distant memory -- by describing Google as "doing everything right". We do mean well, and for the most part when people say we're arrogant it's because we didn't hire them, or they're unhappy with our policies, or something along those lines. They're inferring arrogance because it makes them feel better.
But when we take the stance that we know how to design the perfect product for everyone, and believe you me, I hear that a lot, then we're being fools. You can attribute it to arrogance, or naivete, or whatever -- it doesn't matter in the end, because it's foolishness. There IS no perfect product for everyone.
And so we wind up with a browser that doesn't let you set the default font size. Talk about an affront to Accessibility. I mean, as I get older I'm actually going blind. For real. I've been nearsighted all my life, and once you hit 40 years old you stop being able to see things up close. So font selection becomes this life-or-death thing: it can lock you out of the product completely. But the Chrome team is flat-out arrogant here: they want to build a zero-configuration product, and they're quite brazen about it, and Fuck You if you're blind or deaf or whatever. Hit Ctrl-+ on every single page visit for the rest of your life.
It's not just them. It's everyone. The problem is that we're a Product Company through and through. We built a successful product with broad appeal -- our search, that is -- and that wild success has biased us.
Amazon was a product company too, so it took an out-of-band force to make Bezos understand the need for a platform. That force was their evaporating margins; he was cornered and had to think of a way out. But all he had was a bunch of engineers and all these computers... if only they could be monetized somehow... you can see how he arrived at AWS, in hindsight.
Microsoft started out as a platform, so they've just had lots of practice at it.
Facebook, though: they worry me. I'm no expert, but I'm pretty sure they started off as a Product and they rode that success pretty far. So I'm not sure exactly how they made the transition to a platform. It was a relatively long time ago, since they had to be a platform before (now very old) things like Mafia Wars could come along.
Maybe they just looked at us and asked: "How can we beat Google? What are they missing?"
The problem we face is pretty huge, because it will take a dramatic cultural change in order for us to start catching up. We don't do internal service-oriented platforms, and we just as equally don't do external ones. This means that the "not getting it" is endemic across the company: the PMs don't get it, the engineers don't get it, the product teams don't get it, nobody gets it. Even if individuals do, even if YOU do, it doesn't matter one bit unless we're treating it as an all-hands-on-deck emergency. We can't keep launching products and pretending we'll turn them into magical beautiful extensible platforms later. We've tried that and it's not working.
The Golden Rule of Platforms, "Eat Your Own Dogfood", can be rephrased as "Start with a Platform, and Then Use it for Everything." You can't just bolt it on later. Certainly not easily at any rate -- ask anyone who worked on platformizing MS Office. Or anyone who worked on platformizing Amazon. If you delay it, it'll be ten times as much work as just doing it correctly up front. You can't cheat. You can't have secret back doors for internal apps to get special priority access, not for ANY reason. You need to solve the hard problems up front.
I'm not saying it's too late for us, but the longer we wait, the closer we get to being Too Late.
I honestly don't know how to wrap this up. I've said pretty much everything I came here to say today. This post has been six years in the making. I'm sorry if I wasn't gentle enough, or if I misrepresented some product or team or person, or if we're actually doing LOTS of platform stuff and it just so happens that I and everyone I ever talk to has just never heard about it. I'm sorry.
But we've gotta start doing this right.
Tuesday, October 11, 2011
Recent news reported that scientists have claimed the discovery of sub-atomic particles apparently traveling faster than light. As such, Einstein's special theory of relativity is being challenged. That has also led some people to rethink the possibility of time travel, whose basic premise is traveling faster than light. Of course, it is still just talk for the moment, as everything about time travel is way beyond what current science can do. Anyway, I've long been fascinated by the whole idea of time travel, ever since I was a kid. Though I'm relatively more mature and rational these days, I still love to read anything about time travel every now and then. For those who are interested, just Google it or search it on YouTube; there are a lot of materials on this topic that are entertaining as well as educational.
I’m no expert in talking about the scientific side of time travel, so those terms like wormhole, parallel universe, etc are totally beyond me. However, having reading or watching materials that related to time travel myself, I do have some thoughts about this topic myself that I guess nobody would care, but I just wanna talk about them here anyway.
The basic idea is this: if time travel is possible and I am given the opportunity to do it, what would I think about next?
I guess that the following would be questions that I would ask:
- Is the time travel process itself safe? I.e. would there be any health consequence, such as exposure to radiation or rapid aging?
- How long would it take to travel? Is that proportional to how far back in time I wanna go? Cuz, let's say it takes a month to travel back a year; I may not wanna go.
- How accurate are the time and destination of the process? Is there any 'undo' button?
- How would I determine whether the destination is safe in a physical sense? Cuz that would basically determine the risk of the travel. E.g., if the destination is currently on land but was actually in the middle of the ocean back then, I may sink into the ocean once I get there.
- Would I be able to come back? That’s the key question. If the answer is no, then, forget it.
- Given that I can come back, to what point should/can I come back? Let's say I begin my trip on Jan 1 at 1pm. Should I come back on Jan 1 at 1:01pm? If I go back to the past, stay there for 24 hours, and come back at the mentioned time, theoretically I should have aged one extra day, which is not much. However, if I stay in the past for a longer period, then I may suddenly be much older when I come back to the present. That would be an issue.
- Also, can time travel take me to the future, or just the past?
- Can I bring something from the past to present or bring something from the present back to the past?
Actually, come to think of the above, the last question in particular is a very important one. I don't know about others, but for me, the greatest attraction of time travel is having a first-person view of historical events and the world of the past. My greatest concern, besides my own safety, is whether my presence in the past would have any butterfly effect that could change history. Then you may ask: what could change history? I guess bringing something from the present to the past, or vice versa, would do it. Just imagine what would happen if I left a handgun 1,000 years ago, or left an iPhone for Da Vinci to find. I think I have the ability to suppress the temptation to bring something from the past to the present as a souvenir. However, bringing something from the present to the past would be hard to avoid, especially for my own safety's sake. Yes, I could bring back whatever I brought to the past, but that can't be guaranteed. The only way to make sure I leave nothing in the past is to bring nothing. But that would be a very tough decision to make, unless it is like in the 'Terminator' movies, where only our naked body can travel through time and I can't bring anything with me. That would be too risky for me, and it also raises the great concern of exactly what means I would use to travel through time. Is it a machine, like a time capsule? Or something else? Cuz if such a device exists, I can use it to travel back in time all right, but how can I carry it with me to the past so that I can use it to come back? That's a paradox in itself.
Certainly, what I mentioned above is based on a certain logical rationale. Actually, if there were such a thing as time travel, I would rather do it with my 'soul' than with my physical body. Yes, it would be tempting not just to see and hear (assuming a soul can do those without physical eyes and ears), but also to touch and taste things in the past. For example, I would like to breathe the air, taste the food, and drink the water of the pre-industrial world. However, for the sake of my safety, I wouldn't mind skipping those. After all, how could I guarantee my personal safety in the past? The people back then may not be as nice and naive as we think; they could capture me out of fear or curiosity, and God knows what they would do to me next. In that case, instead of being a passive observer of events, I would become a prisoner or a victim because of my presence. That is certainly a 'no-no' outcome I would try my best to avoid. Thus, I think the best way, or at least my preferred way, to time travel would be with my soul only. That way I could see what I want to see without creating any karma by taking part in past events.
Well, if all the concerns mentioned above can be properly addressed, then here comes the exciting part: what I really want to see in the past. There are so many things I want to see for myself that I doubt I would have time to see them all, given that I am still a mortal being, for God's sake. Even if only my soul traveled, my body would still age, and even if my body could somehow be frozen to stay young, the current world, with the people I know, would sail on without me. I don't want to miss them. So I've picked the following ten events, not ranked in any particular order, that I want to see if my soul can travel through time.
- Jesus being taken up into heaven 40 days after his resurrection – I really want to see this; he is perhaps the most important historical and religious figure of the last two millennia. He performed many miracles, but if I had to pick, I would choose to see his 'truly' final act on earth.
- The Roswell UFO crash – there have been enough controversies over this incident; I've got to see the craft, the bodies, and everything else for myself, first hand and on site.
- The last stone being placed on top of the Great Pyramid of Giza – I think witnessing how the last stone was installed should answer how the pyramids were built.
- A few hours touring Atlantis – just want to see how advanced this mythical continent really was.
- A tour of the tombs of the first emperor of China and of Genghis Khan – just want to see whether what was written in the history books is true.
- Seeing T. rex and other dinosaurs for real – enough said.
- Watching Da Vinci paint the Mona Lisa – to see the most intelligent person known to modern civilization create the most recognized painting of all time.
- The Apollo moon landing in 1969 – just want to see whether it was the greatest hoax or the greatest achievement of modern history.
- Attending an entire Bilderberg or Bohemian Grove meeting in 2011 – just want to hear the richest and most powerful discuss the latest plans for how they will run our world.
- Seeing my granddad, whom I never got to meet, and my parents when they were young – I've got to take the chance to do something personal.
Would you want to see something else?