Monthly Archives: June 2017

EF review for Japan

They said they’d be posting my review “this fall,” which I guess implies that they screen and censor each review for any personal information. Also, I had to write the review in a tiny textbox in Internet Exploder because it failed to work in any other browser, and when I go to the “write review” menu, it’s as if I had never submitted a review in the first place. What a horrible web infrastructure their website has.

I’ll post my full account of my experience in Japan in a few days, but for now, please enjoy my scathing three-star review of the EF tour. The country is great, but the tour was certainly not.

One cannot review the culture and aspects of a country; those are not things that can be rated with stars. You can choose any country that EF offers tours for and expect a great experience simply from being in a new environment with classmates. This part does not change with any educational tour or travel agency.

Thus, I will focus primarily on the tour itself, which is the part that EF specifically offers in competition with other travel agencies. I will cover praise and criticism point by point rather than in chronological order.

The good:
  • There was no outstanding need to contact EF. The tour and flights were all booked correctly.
  • Good density of places to visit. The tour’s itinerary was loaded with many points of interest, yet there was no feeling of exhaustion. I took around 900 photos by the conclusion of the tour.
  • Excellent cost-effectiveness. It’s difficult to beat EF in terms of pricing, especially in how they provide a fairly solid estimate with one big price tag.
  • Tour guide knew his history very well, even if he was unable to explain it fluently. You could ask him about the history of a specific point of interest, and he could tell you its roots very precisely, whether from the Meiji period or the Edo (Tokugawa) period.
  • Every dinner was authentic Japanese food. No exceptions.

The bad:
  • Tour guide had poor command of English and was extremely difficult to understand. In Japan, “Engrish” is very common, and it’s admittedly very difficult to find someone who can speak English fluently and correctly. However, this really reveals that you get what you pay for: if you want a cheapo tour, you will get a cheapo tour guide who might not be all you wanted. I will reiterate this: he was not a captivating tour guide, and it took great effort to try to absorb the information he was disseminating.
  • Little time spent at the actual points of interest, possibly due to an inefficient use of the tour bus. In many cases, it’s cheaper and faster to use the subway, although I concede that the tour bus is useful when one wants to see the area leading up to an important or unfamiliar destination. Still, on the worst day, we were on the bus for a cumulative three hours, yet we only had around forty to fifty minutes per point of interest. No wonder I took so many pictures: the tour felt rushed and didn’t give me time to take in the view before we had to get back on the bus to go somewhere else.
  • Miscommunication with EF during the tour. We were promised two people to a room at the first hotel, but were instead assigned three to a room. The arrangement wasn’t that bad in the end, but it still contradicted the claims made in the travel meetings. What’s more, we were told that an EF group from Las Vegas would be merging with our group, but this never happened (they toured separately from us, though we encountered them occasionally).
  • Reversed tour. There is, in fact, fine print allowing EF to do this if reversing the tour would save money, but it still detracts from the intended experience. My group leader, a native speaker I know very well, told me before the tour that she was irritated by the reversal, since it’s much better to start from Tokyo, the modern part of Japan, and work one’s way southward to the more traditional Kyoto.
  • The last day of the tour was poorly planned by EF, so our group leader had to change the itinerary of that day (well before the tour, obviously) to some significantly better plans. Originally, the whole day would have been basically hanging around in Ueno Park, but she changed that to going to Tokyo Skytree, Hongwanji Temple, the Tsukiji fish market (which is moving elsewhere very soon), and the Edo-Tokyo Museum. We had to foot the bill for the attractions of this day, including Skytree, the museum, and 100 grams of toro (fatty tuna).
  • Poor distinction between what EF had already paid for and what we would have to pay for on top of that. For instance, some of our subway tickets were bought ahead of time by our tour director, but some we had to pay for ourselves, which doesn’t really make sense because all of the transportation was supposed to have been covered by the tour cost.
  • Our group leader (and her husband and kids) ended up doing most of the work, especially rounding everyone up and ensuring that they were all present.
  • Less time to spend your own money than you would expect. After all, they want the tour to be educational rather than just general tourism. But the interesting part was that we had to vote to go back to Akihabara, because we were only given two hours (including lunch!) to buy the games and figurines we had always wanted to buy from Japan. Even after the small petition, the final decision was to make Akihabara and Harajuku mutually exclusive, meaning you could only go to one or the other. I decided to go to Harajuku purely because I’d feel guilty if I didn’t stick to the original plan, but I regret the decision in retrospect because I ended up buying absolutely nothing there. (They just sell Western clothes in Harajuku, so you’re a Westerner buying used Western clothes in a non-Western country.)

There are probably quite a few points I am missing here, but this should be sufficient to give you an idea of the specifics of the tour that are not covered in the generic “it was really great and I had a lot of fun!!” reviews.

As a recent high school graduate, I’ll be looking forward to my next trip to Japan, but this time with another travel agency, one that provides more transparency about itinerary and fees. I’d also be willing to spend more money for a longer, better-quality tour that actually gives me time to enjoy viewing the temples and monuments, rather than frantically taking pictures to appreciate later.


After spending what has pretty much been one solid week sitting in front of my monitor, I haven’t accomplished as much as I wanted. My imagination is abuzz, but where is the action?

Instead of doing productive things for the world, I’m stuck here racking my brains for an asset download protocol rejected by a developer, adding features to a poorly-designed Java application for a summer project (I mean, it could have been worse), and comparing Qt and wxWidgets despite not really knowing C++.

Sometimes, I don’t feel like being a programmer anymore. My programming is doing little to help people directly: there are people somewhere in the world starving, while I’m trying to figure out how to transfer files from a server to a client in the most efficient manner to save players a few clicks. The contrast simply taints my conscience.

On one hand, I know far more about programming than most people my age – many can code, but can they critically analyze others’ code? Can they say, “oh, you should not use a singleton here”? I could take a college computer science class and probably skim through most of the details, hunkering down only on the absolute specifics of the curriculum.
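To make the singleton remark concrete, here is a minimal sketch of the kind of critique I mean. The `Config` class and both logging functions are hypothetical, invented purely for illustration: a module-level singleton welds every caller to one shared instance, while passing the dependency in keeps it swappable.

```python
class Config:
    """Hypothetical application configuration (illustrative only)."""
    def __init__(self, debug=False):
        self.debug = debug

# Singleton style: one module-level instance that every caller reaches
# for directly. A test cannot substitute its own Config without
# mutating global state.
GLOBAL_CONFIG = Config(debug=False)

def log_singleton(msg):
    # Returns the message when debug logging is enabled, else None.
    return msg if GLOBAL_CONFIG.debug else None

# Injected style: the dependency is an explicit parameter, so callers
# (and tests) can pass in whatever Config they like.
def log_injected(msg, config):
    return msg if config.debug else None
```

A test can hand `log_injected` a throwaway `Config(debug=True)` directly, whereas exercising `log_singleton` with debugging on requires patching the global – exactly the coupling a reviewer would flag.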

On the other hand, there are professionals on the Web who puff their chests at anyone who dares to be wrong: “Arrays are pointers? Blasphemy! Go back to reading your textbook!” They’re the people who say “C++ is for Real Men,” yet when it comes time to make some Real Men, they say, “No, you won’t ever be a Real Man!” And to be frank, associating an intricate programming language with testosterone seems pretty sexist to me, on top of the elitist overtones the message already carries.

But what is reality? Well, this is reality when I turn off the monitor: It’s 9:15 am, and there is no breakfast on the table, so I toast some pieces of bread that have conveniently already been placed in the oven. Now it is 9:30 am, and I have the rest of the day to myself, so I check the usual feeds to see any messages I have missed overnight. I play Nuclear Throne with my brother to agitate myself for the day, but when it comes to work, nothing comes to mind. Nay, there is no impetus for learning C++, no sense in implementing a protocol no one will use, no reason in working on a UI for a game I do not play anymore (with a programming language I do not know), no team members ready to continue working on that theoretical chat program. My parents are hard at work; I have the house to myself and my brother.

I look at the sun and it is quickly ascending: a while spent watching my brother play Starbound, and it is already lunchtime. I get my act together and start working on a small fix for that game client, and once the fix is done, it’s 2 pm, so I take a break. Some browsing and it’s 3 pm. I don’t know what to do. 4 pm. My brother asks me to play with him; all right. 6 pm. I’ll screw around a little bit more; 7 pm, and time for dinner. 8 pm; I should shower, but I’m too concentrated on my current task: updating WordPress. 10 pm: apparently my busiest time of the day. 11 pm: time to wrap it up. 12 am: asleep. The day repeats.

I hate being on the computer all day. It’s unproductive and distracting. If I leave home, though, I have to put up with traffic constraints (best to leave after X am and return before Y pm), time overhead (at least 40 minutes of driving there and back), and costs (on average, I find an outing costs around $40). I don’t want to spend money, so I stay inside.

No, I don’t want imagination to get the best of me. People see lucrative virtual universes on their computer monitors, but I see the flesh and blood of their eyes and the liquid crystal components that make up the monitor. The little creatures of Starbound, who colonize your base within seconds of your query for colonists, walk around aimlessly asking your character to send a secret message to their closest neighbor. Anything that looks remotely hostile in Starbound – well, just a slash will kill it, no matter how many words come out of its mouth or how humanoid it appears. Like any video game or action movie, there is no dignity in killing the thugs – when they are all dead, everything surrounding their lives is simply disregarded: their possessions, their memories, their ancestry.

Aliens, Pokemon, zombies – none of it exists. They are all works of fiction created by humans for entertainment, to distract oneself from the depressing realities of greed, corruption, egocentrism, and poverty. It would be very nice indeed to visit one of these “perfect worlds” where anyone can build anything and go to war on a whim, but these worlds are not compatible with ours. As such, until I die, the only world I wish to interact with is this one.

Innovation has always come one step at a time. How do I go from sitting at my desk doing nothing to working with competent people to actually make real things with real demand? I wish I could answer that question, for I have been seeking an answer for years now. Nobody else can seem to procure one, either. And once that is done, which ideas is the world ready to receive, and which are red herrings destined for the trash can? I don’t want to simply volunteer doing some menial task. I want to innovate and work on new tasks. But no one has given me that opportunity yet. How much longer must I wait?

On the regulation of AI

The attempt to regulate AI seems futile: it is something that doesn’t even truly exist yet, as we have no AI we can call sentient. The rationale is well-founded, but what we’re really trying to say is, “We know we can make something better than us in every way imaginable, so we’ll limit its proliferation so that humans are superseded not by AI, but by our own demise.”

So, after this has been done ad nauseam, the “Future of Life Institute” (as if they were gods with any power to control the ultimate fate of humanity!) has disseminated the Asilomar AI Principles. (Asilomar is just the place where the meeting was held. Apparently, these astute individuals really like the beach, as they had gone to Puerto Rico for their previous conference two years prior.) The principles have garnered thousands of signatures from prestigious, accomplished AI researchers.

The Asilomar Principles are an outline of 23 issues/concepts that should be adhered to in the creation and continuation of AI. I’m going to take it apart, bit by bit.


Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

What is “undirected intelligence”? Does this mean we can’t throw AI at a big hunk of data and let it form its own conclusions? Meaning, we can’t feed AI a million journals and let it put two and two together to write a literature review for us. And we can’t use AI to troll for us on 4chan.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

They throw this word “beneficial” around, but I don’t know what exactly it means. Cars are beneficial, but they can also be used to kill people.

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

You get programmers to stop writing lazy, dirty, unoptimized code that disregards basic security and design principles. We can’t even make an “unhackable” website; how could we possibly make an AI that is “unhackable” at the core?

  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?

You can’t. Robots replace human capital. The only job security that will be left is programming the robots themselves, and even AI will take care of patching their own operating systems eventually. Purpose – well, we’ve always had a problem with that. Maybe you can add some purpose in your life with prayer – or is that not “productive” enough for you?

  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

Legal systems can’t even cope with today’s technology. Go look at the DMCA: it was made nearly two decades ago, back in the age of dial-up, and is in grave need of replacement to make the system fairer. Today you can post a video within seconds, and it will most likely contain some sort of copyrighted content.

  • What set of values should AI be aligned with, and what legal and ethical status should it have?

Most likely, they will be whatever morals the AI’s developers personally adhere to. Like father, like son.

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

Like lobbying? I don’t think I’ve ever seen “constructive and healthy exchange” made on the Congressional floor. Dirty money always finds its way into the system, like a cockroach infestation.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Doesn’t this apply to pretty much everything research-related? Oh, that’s why it’s titled “research culture.” I’ll give them this one for reminding the reader about common sense.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

I almost interpreted this as “AI should avoid being racist.” Anyhow, this is literally capitalism: competing teams will cut corners and do whatever they can to lead in the market. This is probably the liberal thinking of the researchers leaking into the paper: they are suggesting that capitalism is broken and that we need to be like post-industrial European countries, with their semi-socialism. In a way, they’re right: capitalism is broken – economic analysis fails to factor in long-term environmental impacts of increases in aggregate supply and demand.

Ethics and Values

Why do they sidestep the word “morals”? Does this word not exist anymore, or is it somehow confined to something inherently missing from the researchers?

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“Safety first.” Okay…

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

You want a black box for your AI? Do you want to give them a room where you can interrogate them for info? Look, we can’t even reliably extract alibis from humans, so how can we peer into AI brains and get anything intelligible out of them?

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

This is not a place AI should delve into anyway. We will not trust AI to make important decisions all by themselves, not in a hundred years.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Meaning you want to be able to sue individual engineers, rather than the company as a whole, for causing faults in an AI. Then what’s the point of a company if they don’t protect their employees from liability?!

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

What if an AI turns out to align with those values better than humans do? What if the company that made it becomes corrupt and says, “This AI is too truthful, so we’ll shut it down for not aligning with our values”?

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Debatable topics like abortion come to mind. Where’s the compatibility in that?

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

Again, we don’t even have control over this right now, so why would we have control over it in the future with AI?

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

And it probably will “curtail” our liberty. Google will do it for the money, just watch.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

What a cliché phrase… oh. It’s as if I didn’t include this exact phrase in my own MIT application, too gullible to realize that literally everyone else applying to MIT had the exact same intentions.

When Adobe sells Photoshop, is it empowering people to become graphic artists? Is it empowering everyone, really, with that $600 price tag? Likewise, AI is just software, and like any software, it has a price tag, and the software can and will be put for sale. Maybe in 80 years, I’ll find myself trying to justify to a sentient AI why I pirated it.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Reminds me of the imperialist “Greater East Asia Co-Prosperity Sphere.” Did Japan really want to share the money with China? No, of course not. Likewise, it’s hard to trust large companies that appear to be doing what is morally just.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

I can’t even tell Excel to temporarily stop turning my strings into numbers, so it’s hardly going to be easy to command an AI to leave a specific task to be done manually by the human. What if the data is in a raw binary format intended to be read by machines only? Not very easy for the human to collaborate, is it now?

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

I think at some point, a sentient AI will have different, more “optimal” ideas it wants to implement, or it will shut down entirely.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Tell that to our governments, not us. Oops, too late, the military has already made such weapons…

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“Assumptions” including this entire paper. You assume you can control the upper limit of AI, but you really can’t.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

You don’t say.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

Because such efforts show that human labor is going to be deprecated in favor of stronger, faster robotic work…?

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Every person will have their own “superintelligence.” There will not be one global superintelligence until the very end of human civilization, which ought to be beyond the scope of this document, since we obviously can’t predict the future that far ahead.


You can make pretty documents outlining the ideals of AI, but you must be realistic with your goals and what people will do with AI. Imposing further rules will bring AI to a grinding halt, as we quickly discover the boundaries that we have placed upon ourselves. Just let things happen, as humans learn best from mistakes.

Final report

I have a sore throat right now, so I don’t feel completely great. But the fact that high school is over has failed to sink into my brain. It feels like the chaos will continue next week, but it won’t. It’s over. College will not be like high school, but my brain predicts that it will be a greater burden, a tougher threat, that necessitates mental preparation.

I hardly felt emotional. It’s not the end of the world; it’s not as if those people instantly disappear once the ceremony is over. Yet I am somewhat concerned: will I ever see them again, or care about them again? Something tells me that it doesn’t matter anymore, that the ultimate answer is no. We spent time with them because they were our classmates, but now they are classmates no more. They are a distant speck now, their personalities branching in new, unanticipated directions unbeknownst to their old friends. Eventually, the old friends lose their commonality, and their sole connection is that they once knew each other and laughed together a long time ago.

Of course, there were few other ways to complete my schooling. My greatest disappointment, aside from my dad taking a total of nine photos during the ceremony on my DSLR (and no videos!), was the tendency of teachers to pull me into only what was required to be known and nothing else: “You don’t need to know that (yet).” Is life to be imparted from a textbook? Hence, I’m grateful that this phase is over, that there is no longer an institution locking me into a fixed eight-hour schedule dictated by an electronic bell system that sounds at 45- or 50-minute intervals to force a transition between entire subjects. No, enough of that.

I have a network large enough to find whoever I want from my class, so the problem of friendships does not concern me after graduation. If I want friends, I’ll get them.

The sore throat is gone now, and my greatest fear is that the memories of school will fade away too quickly. I know it is not possible, but the mere thought that disuse can cause memories to simply fade from the brain is startling.

School really was just a chapter of my life. I figure the only reason I didn’t get into MIT was that I didn’t apply myself enough. I just took orders, and that’s it. I didn’t live a life of excellence like I should have. I’m not talking about “rugged individualism” or any of that “patriotic” idealism; I’m referring to the concept that when you do something, you do it excellently. But by the end, my perfectionism had to be dialed back to stave off my recurring depression.

I see companies of people pouring money into ideas and making great things, things that could never be accomplished alone in one’s spare time. Do you really think I love being here on the computer doing nothing, repeatedly checking forums and Discord for any new stimulus that might need my attention? No.

I wait for the day I’m talking to the psychologist and he tells me, “Well, you spend too much time on the computer. Get off and stop using the computer,” and I’ll answer back, “What will I do instead, then?” and he will tell me, “Read books. Play board games. Go outside.” But I will tell him this: “I do not want to consume anymore. We consume, consume, and consume. I want to create.” And, of course, with the limited mindset of a simple member of society, he will suggest that I paint, or draw, or write, or build with blocks. But I want to do no such thing; these are simply small enjoyments, little capsules that release brief pangs of satisfaction.

Let’s get down to it. I want to create things that actually help people. I want to design and build real contraptions with a functional purpose. Heck, I’ll start a company if I have to, but I want people who can, will, and are inclined to help me reach these goals. Screw individualism. It took Adobe a decade to build and perfect a full-fledged image editor, something open-source devs still haven’t finished doing. I guess David Capello was right to charge for Aseprite: there was no way to accelerate production without dedicating himself to it. (Heck, he had been working on it for more than a decade.)

I’m not a kid anymore. I want to make dreams realities, but I can’t do it alone, much less in front of a big blasting array of pixels. All my life I wanted to build things and I was never given the opportunity to truly apply myself in that field. The NXT was an opportunity seized from me; the FPV project, my father found no purpose in; the water-condensing windmill – well, let’s just say I never even got a chance at that; the electric bike, my family dismissed as some kind of glorified moped. I don’t know how to read or understand circuit diagrams well. I have no mechanical intuition or background.

I don’t want to take an ordinary job, either. Even the prospect of “coding till I drop” seems rather dull. I want my job description to be “teaching an AI how to automatically correct common programmers’ mistakes,” or “provisioning AI VMs with calculus, English, and Google.” I know my college professors won’t help me in that, either.