Overseas Travel Report

Christabella Irwanto

Events covered in this report: Boston Hacks 2017, Silicon Hacks 2017, and Google I/O 2017.

Boston Hacks 2017

On the weekend of 29 April, Boston University hosted hundreds of hackers in Metcalf Hall for its annual hackathon.

Metcalf Hall

BU student organizers

Late-night piñatas

Late-night pushup session

Awesome American breakfast buffet spread the next morning -- egg squares, bread pudding, bagels, roast potatoes, and sausages.

Team

At the event, I teamed up with John, a junior from Northeastern University, and Abdul, a freshman from Clark University.

Abdul, John, and I hard at work

Hack

AR can be broadly split into two categories: marker-based and markerless. One way the latter can be achieved is by constructing a 3D map of the surroundings from a robot's visual input while in motion, through methods such as SLAM (simultaneous localization and mapping) or, more recently, DSO (Direct Sparse Odometry).

We originally wanted to interface this open-source, GPL-licensed DSO implementation in C++ with a live camera feed, but we ran into many hydra-headed problems over the course of the hackathon.

While we finally did get the C++ project to build (and learnt a lot of things along the way), we did not have enough time to do anything notable with it. So for the hack, we instead resorted to... Markerless AR.

Tada!

I quickly made an AR application with Unity and Vuforia, which I had learnt to use during 50.017 Graphics in Term 7. This was an excellent opportunity to refresh what I had learnt, because I soon realized I had just gone through the in-class graphics lab by following the instructions step by step without even understanding what the steps meant. But now, after doing it a second time and explaining the steps to my teammates, I can confidently say I understand the steps involved in making a marker-based AR app.

Developing the application in Unity

The app superimposes a spinning Boston Hacks medallion on "markerless" markers (a piece of paper with the word markerless written on it) within the camera's viewing frustum. Obviously, it's really not markerless but marker-based. It's supposed to be funny!
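
For the curious, here is a minimal sketch of the marker-based idea in Python using OpenCV's ArUco module. This is not the Unity/Vuforia pipeline we actually used (Vuforia handles detection and pose estimation for you), and the aruco API differs a little between OpenCV versions, so treat it as an illustration of the core loop: find a known marker in each camera frame, then draw your content on top of it.

```python
# Minimal marker-based AR sketch with OpenCV's ArUco module.
# Not our actual Vuforia pipeline -- just the same core idea:
# detect a known marker in each camera frame, then overlay content on it.
# (The aruco API differs slightly across OpenCV versions; this targets
# opencv-contrib-python 4.x before the 4.7 API change.)
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)  # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find marker corners and IDs in the current frame.
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)
    if ids is not None:
        # Stand-in for the spinning medallion: just outline the markers.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker-based AR", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```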

Explaining and demoing our hack to other participants, visitors, and judges.

Listening to another team's demo.

The hackathon had all sorts of hacks including VR, websites, and mobile (iOS and Android) apps.

While the average technical difficulty of the hacks was not as high as that of other hackathons I'd been to, I was very inspired by the dogged persistence of some of the beginner hackers I met, who kept at it despite not even fully understanding the code they were writing. That's really more than I can say for myself at times. And after all, we really are all just beginners compared to the legends out there, and where we will be in 10 years will depend less on where we stand today, and more on how much we do every day from now on.

Silicon Hacks 2017

This hackathon was held at 42 University in Fremont, California, a completely tuition-free university offering a computer science education, funded by a French billionaire by the name of Xavier Niel (whose first name happens to coincide with the surname of Professor X, who also founded Xavier's School for Gifted Youngsters... 🤔).

There were 1,042 iMacs at the campus. Yes. 1,042. The current student population doesn't even hit a few hundred!

Late-night cup stacking competition.

Food for days.

Days.

This hackathon was a little special because it was particularly civic-minded. The themes were: Education, Healthcare, and Civic Engagement.

Team

I formed a team on the day of the hackathon with Aatif, a senior from San Jose State University; Clarence, a freshman from UC Berkeley; and Tianjiao, a freshman from Diablo Valley College.

Tianjiao, Clarence, me, and Aatif taking a group photo with the organizers.

Hack

Stereotyping, prejudice, and discrimination are at the root of many systemic flaws in our society. Racism, sexism, and many other forms of discrimination continue to be serious human rights challenges all over the world.

Studies have shown that interpersonal contact is one of the most effective ways to overcome prejudice: given the chance to interact with people from different backgrounds, people open their minds to different points of view.

We created a video chat web app using WebRTC and Node.js that takes your personal information and interaction history and matches you with someone from a different background. We also used IBM Watson Personality Insights on Twitter data to help pair people with similar interests.

It allows users to grow beyond their social bubble and interact with people from groups they would never have come into contact with otherwise.
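
At its core, the matching is a scoring problem. Here is a hypothetical Python sketch of the idea (the field names and weighting are made up; our actual Node.js scoring differed in detail): reward difference in background and similarity in interests.

```python
# Hypothetical sketch of the pairing idea (field names and weights are
# made up; our actual Node.js scoring differed in detail): prefer partners
# with a *different* background but *similar* interests.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    background: set   # e.g. {"US", "student", "urban"}
    interests: set    # e.g. {"music", "hiking"}, from Twitter analysis
    met: set = field(default_factory=set)  # people already matched with

def jaccard(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)

def score(a: User, b: User) -> float:
    # Background *distance* plus interest *similarity*.
    return (1 - jaccard(a.background, b.background)) + jaccard(a.interests, b.interests)

def best_match(me: User, pool: list) -> User:
    # Skip yourself and anyone you've already been paired with.
    candidates = [u for u in pool if u.name != me.name and u.name not in me.met]
    return max(candidates, key=lambda u: score(me, u))
```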

Giving a demo to a judge.

In the end, we won first prize in the Civic Engagement category, as well as a brand new Google Home and a shiny MLH (Major League Hacking) medal. Yay!

Google I/O 2017

Google I/O was a fun, colorful, lively, and awe-inspiring 3-day spectacle of creativity, ingenuity, and passion. Great music was in the air, people were warm, and even the weather was rooting for I/O 2017's success. In spite of the whopping 7,000 attendees, the seamless logistical planning -- shown in the I/O mobile app, session registrations and seating, efficient food distribution, restroom availability, transport, etc. -- allowed me to enjoy every moment without being hampered by trifles such as having to wait in line to use the restroom. I not only got to learn about and get first-hand experience with technology being used at and developed by Google, but also had great conversations and made friends while standing in line, eating at mealtimes, taking a shower, biking home along a highway at night, and more. I even convinced some of them to pose in the shape of the letters SUTD:

Do you see SUTD?

A more normal photo.

Google Home, Google Assistant, and Actions on Google combined to create a voice-controlled, mocktail-making assistant.

Conference goodie bag.

Every participant got to bring a Google Home home.

Hundreds of Google bicycles were available for anyone to freely take and use.

Took one for a ride around the bay.

Favourite meal at I/O: Menlo Park Sandwich! Tender steak and blue cheese in a Dutch cheese roll.

Food for days.

Breakfast table.

I/O breakfast.

Sessions

There were 7 stages + 1 amphitheatre, so for every timeslot there were up to 8 sessions running concurrently. Good thing they were all livestreamed and quickly put online with closed captioning! Some of my favorite sessions were...

Keynote and Developer Keynote

The keynotes and some talks were held at the main amphitheatre.

Both keynotes were flawlessly executed, with bug-free demos, seamless transitions, and fresh content that made 2 hours go by in a snap. One notable highlight was an impressive demonstration of a highly scalable, high-concurrency web app where people could blow bubbles with their phones and view those bubbles in AR.

The Big Web Quiz

The speakers made a slick and shiny web app... to hold a realtime multiplayer quiz about web apps! I learnt about all sorts of funny web idiosyncrasies, from HTTP cache headers to SVG animation quirks.

Progressive Web Apps with Frameworks

This talk convinced me that progressive web apps were going to be the future standard for web apps, and explained how they play a part whether you are developing with React, Preact, Vue.js, Polymer, or another framework.

Codelabs

Codelabs were a great way to get up and running with Google's latest technologies and truly understand them first-hand. (Also, if you did 4 codelabs you could enter a raffle for next year's I/O tickets!) Any time I ran into problems (e.g. an inexplicable bug in amp-bind, or an error while retraining in Tensorflow), there were lots of Googlers around to ask for help, even if someone from the specific project I needed wasn't always around.

Tensorflow for Poets

I trained and ran a classifier for flower images, all on a Tensorflow Docker image, in half an hour!

The model was retrained from an existing model already trained on another, similar problem -- this is called transfer learning. We used a huge image classification model called the Inception v3 network, which was trained for the ImageNet Large Scale Visual Recognition Challenge 2012, with millions of parameters for 1,000 different classes, and retrained only its final layer, for 4 flower classes. We can do this because, although Inception v3 was not trained on these flowers specifically, the information needed to distinguish between 1,000 classes is also useful for distinguishing other objects, so we can feed that information into a new final classification layer that distinguishes the flower classes.
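
The codelab drove this with TensorFlow's retrain.py script; a rough equivalent in modern Keras (my own sketch, not the codelab's code) makes the structure obvious: freeze the pretrained network and train only a new final layer.

```python
# Rough Keras equivalent of what the codelab's retrain.py does
# (an illustration, not the codelab's exact code): keep Inception v3's
# pretrained weights frozen and train only a new final layer for 4 classes.
import tensorflow as tf

# Inception v3 without its original 1,000-class top layer; pooling="avg"
# yields one 2048-d feature vector ("bottleneck") per image.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # transfer learning: don't touch pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),  # the 4 flower classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# flower_photos/ with one subfolder per class, as in the codelab's dataset.
train = tf.keras.utils.image_dataset_from_directory(
    "flower_photos", image_size=(299, 299), label_mode="categorical")
train = train.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))
model.fit(train, epochs=5)
```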

Tensorflow graph for the Inception v3 network.

Illustration of transfer learning.

AMP Foundations ☇

Accelerated Mobile Pages (AMP) is a way to build web pages that render with reliable, fast performance. Through this codelab, I learnt what it takes to make a valid AMP page, through practices such as inlining CSS in <style amp-boilerplate> and <style amp-custom> (for customized styles), bidirectional linking with a canonical page, etc.

Beautiful, interactive, canonical AMP pages

After understanding the underpinnings of AMP, I moved on to creating an AMP page with advanced functionality. AMP Start offers many resources, such as quick-start templates and interactive, cross-browser, responsive UI components (I used Responsive Menubar and Fullpage Hero), to help developers create stunning AMP pages. The templates come with opinionated, minified CSS in <style amp-custom> that offers classes for you to use such as px3, mb2, and ampstart-dropcap, building on top of the low-level CSS toolkit Basscss (which provides classes like px3 and mb2) by adding AMP Start specific classes. To serve the static files, I used npm's serve, an alternative to Python's SimpleHTTPServer and others.

AMP HTML components are custom elements with AMP-specific tags, and they make common patterns easy to implement in a performant way: amp-sidebar, amp-accordion, amp-carousel, and amp-instagram, as well as built-in components like amp-video and amp-img. I learnt about useful attributes of amp-img such as srcset and layout="responsive", and about using width and height just to specify the aspect ratio. Other than presentation components that affect layout, there are also components for logic, such as amp-selector, which contains arbitrary HTML elements and adds a selected attribute to an element when it is tapped.

Finally, with some experimental features (amp-fx-parallax and amp-animation), I had a polished, responsive page in absolutely no time at all!

Getting Started with Cloud Shell & gcloud

This was a quick tutorial where I learnt to use the gcloud CLI in the Cloud Shell of a new Google Cloud project.

Sandboxes

Accessibility

Watching Dirk of Google Accessibility demo the new advanced accessibility audit in Chrome DevTools, I learnt about the new accessibility features, such as being able to view the entire tree of information exposed to the screen reader (and the cascade of these attributes) and being able to jump to the lines of code causing a specific accessibility error. Along the way I also picked up some other handy tips, such as Scroll to section and Add attribute.

New Accessibility Tree feature in Canary, the bleeding-edge Chrome release.

I also got to know some new frontend tools with accessibility built in that are popular among Googlers and that I could use as well, such as Angular Material, and Polymer's Paper components, the Polymer implementation of Material components for the web (which itself is vanilla and can be used with any framework).

Mobile web

An enlightening chat 💡 with Sergio Gomes of Google Mobile Web gave me a much clearer picture of Progressive Web Apps -- a gold standard for web apps to adhere to, optionally with an Add to Homescreen function enabled by including a suitable manifest in the web app source.

I also spoke to Erwin Bombay and clarified AMP with him as well: AMP is not a browser implementation, but rather a Google project enabled by JavaScript and by AMP caches run by Google, Cloudflare, and others.

Experiments

People around the world have made beautiful and creative projects with Google's AI, Chrome, and more. I was blown away by many of these projects, including VR ping-pong played with your head movements, a multiplayer realtime game where you need to become one with a neural network trained on thousands of doodles in order to succeed, and a super groovy musical-visual spectacular that takes input from the camera and spits out sick verses over a sweet electronic music backing and technopop-y visuals.

Google Cloud and Machine Learning

Playing whack-a-mole with Kubernetes pods, trying to bring down the incredibly resilient distributed server!

A single Tensor Processing Unit! It was a lot bigger than we expected.

Sorting candy with machine learning!

Playing a piano duet with a program trained on an RNN!


Office Hours

Different Google teams, from Accessibility to Machine Learning, set up Office Hours where anyone could come and ask questions.

Polymer and the Web Team

I was curious about the state of the mobile web with respect to access to native functionality, and had been asking a few Googlers about it, but none of them were too clear on it, so I decided to head to the Office Hours hall and speak to the Web Team. Paul Kinlan was able to give me a very satisfactory answer. In short, due to vulnerability and privacy concerns, there are not yet even any specifications for web applications to access native functions such as the alarm clock or lower-level sensor readings (e.g. raw gyroscope/accelerometer data, as opposed to the current DeviceOrientation API's higher-level alpha, beta, and gamma; Intel and others are currently working on this), but they are working on it!

Conclusion

Since it was my first time attending a conference, I made sure to diversify my experiences and do a bit of everything: talks, sandboxes, office hours, codelabs, and just plain ol' networking (over food, games, and other fun activities). This was definitely a good strategy and I am happy I got to have all sorts of different conference experiences, although I would perhaps prioritize some things over others next time! For example, I would come prepared with a lot more questions and problems to pick Googlers' brains on, because the discussions about a particular issue I brought up were some of the most exciting and illuminating moments. I would definitely get more one-on-one time versus attending talks, which are recorded and put online anyway. But of course, watching in person is completely different from watching on YouTube, and I would not miss attending the really interesting talks in person either, especially the interactive and multimedia ones!

Some of the most impressive, polished, and engaging technical talks I've ever watched were sessions at Google I/O 2017; they definitely set a new bar for what I think presentations should achieve and what I should personally strive for. I also got my feet wet building AMP pages, progressive web apps, and a Tensorflow classifier, among other things, in an exuberant and conducive environment that accelerated my learning. I met a lot of incredible people, many of whom were building products that I was actually using, from Google products to products built on Google platforms, such as Fabulous and Nova Launcher. I'm so glad and grateful for this opportunity to attend Google I/O.