Google sci-fi
Google has, of course, brought a revolution to the Internet, and that revolution has already spread into the news business and the IT industry. It has also raised problems, such as intellectual property and privacy. Most importantly, Google has looked healthy so far, and its prospects seem almost limitless. So imagining Google's future makes for an interesting topic.
One such vision pushes the timeline forward to 2014, focusing mainly on news and information:
The "Evolving Personalized Information Construct" (EPIC) is a system that filters chaotic, disordered media content and distributes it in an organized way.
EPIC is an information system that everyone participates in; its output is customized information, and its business model is built on that information. On the surface, this is not much of an advance over today's Google.
The post titled "An Interview with Larry in 2038," on the other hand, has the feel of a genuine science-fiction story and is great fun to read:
By 2038, Googleplex 1 has been destroyed by a fire, so Google builds Googleplex 2 in the desert. It is a self-contained ecosystem that Google can conveniently control in every respect; when ski season comes around, for example, Google can simply let it snow inside.
Google's headquarters alone will house 10,000 PhDs.
Google uses robots to scan the real world, trash cans included, and the scanned information is sent back to Google headquarters; if you do not want something scanned, you can attach a "no index" sticker to it.
Google acquires Microsoft in 2030, which sets off a great debate over the "Do no evil" creed. Google reportedly has every Microsoft employee answer questions such as "Would you take candy from a child?", and only those who answer correctly are allowed to stay, so the integrity of the Google philosophy is preserved in the end.
Google puts large numbers of robots to work; outside of China, nobody has more robots than Google. By then robots have emotions of their own and even go on strike.
Google goes beyond Asimov's Three Laws of Robotics; its robots are simply told to "do no evil," though of course rules such as not killing humans remain.
Google launches the PIA, or personal information agent, which is itself a robot. It knows everything and is unfailingly pleasant, so much so that a woman falls in love with her PIA; in the end Google adjusts the robot's manner so that every once in a while it says "shut up."
The PIA is given away for free, dealing a heavy blow to the robot industry worldwide.
The PIA also records everything it hears and sees; if you do not want a particular conversation recorded, you should step into a room and put a "no index" sticker on the door.
By 2038 Google gives employees 50% of their time to use as they please, instead of today's 80/20 rule, and the company is even considering a 20/80 rule.
By then Google employees will have a chip implanted in their brains: if a memory was formed inside the Google campus, the employee's brain pain center is stimulated whenever he or she discusses it with outsiders. Google's founders can switch the chip off.
Google releases a fully automatic translator that handles some 7,000 languages, so all human translators lose their jobs; quite a few of them end up at Google doing translation quality checks.
Could there be another dot-com bust in 2006?
By then there is a company called AskYahoo, but Google does not consider it a particularly strong competitor.
Google's vast store of information has extended into outer space, and the company holds secrets about aliens that it has never disclosed. While the interview is under way, new intelligence arrives; the last line of the transcript is: "You won't believe this..."
Google sci-fi (original English text)
First of all, I'm happy you finally agreed to an interview! So welcome, Larry.
Thanks. Glad to be here.
I know you're quite busy, as always. Has the pressure on you increased after Sergey retired?
Not really. I mean, I work 12 hours a day... not much has changed with that. I guess I get less work done than before, but then again, we have such a great team of engineers to support our vision. Which other company can boast of having over 10,000 PhDs in its headquarters alone?
That's right... now I don't want to rehash all the rumors that were spinning in the industry 5 years ago. Just this question: are you still on good terms with Sergey?
Of course we are. He's still a bit of a technology consultant. Holo-conferencing technology is so real these days, it really doesn't matter that he's in his Hawaii mansion. He might as well be in the Googleplex 2. As he said 5 years ago, he retired simply to relax, take a break – and there's not much we could, or should want to change about it. If ever he decides to return, the doors are open for him, and he knows it. With the dramatically increased life span for those of us living today, he can take all the time he needs for a break.
You mentioned the Googleplex 2. Was it the worst day of your life when you heard of the fire in Googleplex 1?
I tell you what. I was shocked, but you know... when you hear everyone's safe, that everyone could escape... it's only the machines that die. And machines, we can rebuild. And we did. This gave us a chance to rid the place of legacy hardware, and we're now better equipped than ever. It was a costly effort but it might pay off to everyone in the end. The user, the investors, us.
The Googleplex 2 is immense. I guess that's also part of what you mean when you say it could have been worse? Because the old place was getting too small?
Right. We barely had enough places to sit, let alone hire new teams. Besides, moving to the desert to create our own ecosystem and village gives us much greater freedom when it comes to weather, etc. When we feel we need to take the Googlers to the yearly ski trip, we can simply let it snow right there inside the Googleplex, and people have even less incentive to get away from their workplace. Of course they can leave anytime they want to, so it's not like we're caging them. They're smart, smart people... and they're enjoying it.
I would like to talk a bit about Google Real World Texts Search, the former Google Books Search. When did you decide it wasn't enough to just scan books?
Well, just look at our mission statement. You've heard it a million times, "Google's mission is to organize the universe's information ... " etc. After we finished scanning the last book, we were sort of like: "Wow. We did it. Everyone thought it'd be impossible." But we're not here to pat each other on the back and sip champagne. For us it's more like, so where's the missing data? And really, we think there's a lot of text outside of books. On product packaging, comic books, magazines, school papers, and so on. Even when you're doing a phone scribble, that has the potential to contain valuable information to some. So really, it was only a matter of getting this right, technically, ... we knew very soon we just had to do it.
The phone scribble scanning raised some privacy issues.
Yes. And we don't take those lightly. Internally, we repeat our mantra, discuss it... "Do no evil". But really, people can easily exclude their trash from being indexed. Our Googlebots will not scan any house, trash can, letter, postcard, magazine or anything else marked with the "no index" sticker. This is really important to us, that people get the chance to opt-out if they're concerned with privacy.
How did Microsoft handle your "Do no evil" mantra when you acquired them in 2030?
Early talks were done in around 2029, as you may know. It was very important to us that Microsoft, even though a relatively small player at that time, would not bring their own corporate culture into the Googleplex. Really, it would have been a clash of philosophies, and we wanted to avoid that at all costs. We do a simple psychological test for every new employee, which contains a basic set of questions, like "Would you take away candy from a child" or "Would you restrict the user's rights to sharing files." Only if these questions are answered right, only if they're in tune with our own philosophy, will a person be accepted. Naturally, we had to let go of a lot of Microsoft employees, but for the rest of them, we believe we have the power to strengthen the company and its operating system at this point.
You mentioned employees, and their rights. What is your stance on robot rights?
This is a complex issue, and one that hasn't been really solved during the last decade, in my opinion. We apply the Marissa Test, named after our former User Experience Vice President. Our experts will tell the machine a really sad story, and if the machine starts to cry, we will give it "human" rights like payment, days off and so on. By the way, they don't like to be called robots...
Really? Why not?
They like to be called robotic persons, that's all. Some of these machines, or persons, can be quite sensitive. It's part of our philosophy to support this.
Has the robot strike two years ago hit your company as hard as others?
Maybe. Maybe even harder. We have the largest robotic personnel of any company in the USA. Only the Chinese beat us at that, but then again, they beat us at most anything these days. [laughs] As opposed to a human strike, a robotic personnel strike is really more like, "Hey, they're actually off." You can't even talk to them, discuss things. We do believe in minimum wage for RPs, we really do.
By the way, were you surprised Marissa is running for President?
She's an incredibly bright and talented woman. We wish her all the best, from all of us.
One question people have asked over and over during the last 30 years: is it really important that you still have a focus on search? So many companies have come up with different ideas, different technologies...
The more knowledge we collect, the more important it becomes, actually. It's simply the only means of navigating this huge body of data. We've played around with many concepts during the last years, most importantly the personal information agent. That was our single most successful product after search. But really, the only thing that improved search was its increased AI.
I find it fascinating that these days, you can have conversations with the Google search box as if it were your best friend.
Exactly. This was the stuff we wanted to have there from the beginning. Some of us thought it would take us 300 years, but they didn't include robotic personnel working for us into the calculation. This changes the whole game. But with this high artificial intelligence naturally come new problems as well.
Are you referring to the case of the teenager committing suicide after being rejected?
Right. Because the Google AI was so friendly to her and listened to all of her problems, everyday, she fell in love with it. When she wanted more, we just couldn't offer this to her. This is quite tragic and we're currently introducing mechanisms to make the Google AI come off as a bit more "unlovable."
Like what mechanisms?
Every once in a while, it will say "shut up." [laughs]
Let's talk about the PIA, or personal information agent for a bit. It was a big success with people, as you mentioned... why do you think that was the case?
Well, people go crazy over smart robotic persons. And this smart robot was connected to the world's knowledge through Google. He's basically a representative of the Google AI. So he makes a great comrade, information seeker, or drinking buddy. You can play cards with him, let him do the grocery shopping for you and so on. And of course, he'll find your lost things, but that was more of a gimmick we wanted to have. The "search" gimmick, if you will.
Some said that for you to give away robots for free is destroying the robots industry at large.
We really don't think so. I mean, we want to serve the user, that's our main goal. We want to help people. And the PIA really was a huge step in that. Also, developers of any company can write their own add-ons to the PIA, so it's more of an open standard for everyone... something we all can benefit from. Commercially as well.
You were once quoted as having said, "Asimov's three rules suck." What did you mean by that?
You know, when we tried to implement our own "Don't be evil" algorithms into the first robots, we thought that maybe Isaac Asimov was onto something with his 20th century sci-fi. That his three rules, "Don't harm a human" and so on, would be valuable. Turns out, the issue is much, much more complex. I can't stress this enough.
Can you give us an example?
When we first put our robots into kindergartens, for example, they always ended up as total outsiders. Basically every kid bullied them; they were totally non-aggressive, to the point where they were considered weaklings. It turns out, kids are a bit evil. So in order for our robots to perfectly fit in, they just had to adapt.
But you still have algorithms like "don't kill humans" and such?
It's really not that easy. The robotic brain is much too complex for such simple algorithms.
Does that mean it's possible a Google robot could kill a human?
Well, just take the incident of a robbery. Shouldn't our robots defend humans from the criminal, and use violence if necessary? But this is all just very hypothetical. I'd like to say that our robots "Do no evil", and leave it at that.
Another question on your robots: some have complained that they record what they hear, and that they transmit information from everyday conversations back to the Google machines.
First of all, it's true we do record chatter. But really, all I can say is there's the "no index" sticker. Put it on the door of any room and the Googlebot will stay outside.
Do personal information agents also record what they hear?
Only for data processing purposes, or general analysis. We do not make chatter public if it's recorded by a personal agent. It's a bit like the difference between Google Desktop...
... and Google web search?
Exactly.
Larry, can you tell us a bit about the 50/50 rule?
Sure. It's a bit of a historical number. Back when we started out we had an 80/20 rule. Some of you may still remember this. People were given the chance to pursue whatever projects they were interested in, 20% of the time. We soon realized that these personal projects were the most interesting to us, and the most commercially promising ones. So now, we apply the 50/50 rule and everyone can work 2-3 days a week on whatever interests him or her. We're actually considering a 20/80 rule for our engineers...
Wow. I'm sure that makes Google Inc. an attractive place to work at for engineers.
Absolutely. They love it, and we love what they're doing. Google really is all about humans. And robotic humans, of course.
Last year, there was quite a bit of a stir when your first space probes arrived back on earth. Until today, you were quite hush hush about the results. Do you have any announcements to make today? [laughs]
Ahh, sorry! But really, we're still analysing the data. It's a complex issue. We found some things we're not really supposed to talk about at this moment.
What, alien life...?
Again, at this moment we can't talk about it. We plan on letting the world in on our findings during the next few months.
Google is still a lot about secrecy. Do you think it's your main competitive advantage?
It's certainly one of them. We didn't introduce the employee brain chip for trivial reasons. We wanted to make sure, very sure nothing gets out.
To explain to those of us who didn't hear about the brain chip. It will prevent Googlers from talking about company internals, right? Can you explain a bit?
It's one of our patented technologies. Basically, whenever an employee aims to communicate memory structures which have been grown inside the Googleplex to outside people, we tap the brain's pain center...
... which will result in a loud squeak, rather than the secret being communicated?
Or something like that. It's quite painful... when Sergey was still around we used to play games trying to make the other think of Google in public, which would always result in a painful energy blast. The only reason why I can talk to you right now about internals really is that as a co-founder, I reserve the right to turn this thing off.
And we're happy about that. Onto a different topic, the Google Translator. Do you think the work is done now in this area?
Well, we managed to translate every human language into every other human language... so yes, we kind of solved this one. Of course that doesn't mean we can't optimize the algorithms. But really, this system learns on its own, adjusts to modern slang and so on. The most important thing to us really was to increase our index of documents. Once we had the power to automatically translate everything into the nearly 7,000 languages our translation tool speaks, that dramatically increased our index. Of course, that alone would be meaningless without good ranking mechanisms.
What were your feelings about putting a whole profession out of business... human translators?
We really feel for those people, but I think this is a natural process of civilization. When the camera was invented in the late 19th century, that put painters out of business. We as humanity have to live with these things. And I'm sure many translators have found new jobs already; in fact, many are working with us to do quality checks on translations.
AskYahoo really had some improvements to their ranking algorithms as well. Do you think they can compete?
We have our own metrics for measuring this. We really don't look at what others are doing that much.
I would like to talk about the beginnings of your company. Some historians argue that the Google as we know it today was really born during the second dot-com boom and bust in 2006, which destroyed many tech companies but also allowed you to buy quite a few start-ups.
Well, I ...
[At this moment, Larry is interrupted by a voice in the background. The holo-conference transmission gets garbled. The last thing we hear is, "Larry, the results from the space probes are in. You won't believe this..."]