Google Developer Days India 2017 – Day 1 (Track 1)


Good morning, everyone. I work in Google's office right here in Bangalore. I lead our engineering teams in India and Singapore for a very important initiative called Next Billion Users. I'm really excited to be here with you today at Google Developer Days, because I myself am a developer – I've used many Google APIs over the years. This is the first GDD ever in India, and it is the largest developer event Google has ever held in India!
[Applause]. But first, I want to talk to you about why and what Google is doing in India, and in emerging markets all around the world. Today, Google has seven products with more than a billion users each around the world. But we know that our next billion users will come from different parts of the world than our first billion. Our next billion users will not come from the US, or Canada, or Germany. They will come from India, Indonesia, Brazil, Nigeria, and similar countries all around the world. In India alone, we expect 650 million Indians to be online by 2020. That's why, a couple of years back, we started the Next Billion Users initiative at Google. We noticed that the future users of Google's products will be different from our current users. Perhaps the
most important difference is that they’re going to be mobile
first, and, in fact, largely mobile-only. The smartphone is
their best computer, probably the best camera they’ve ever
owned, and probably the first video device they've ever
carried around. Let’s take a look at users in India,
Indonesia, and Brazil, and compare them to the US. The
users in these countries are incredibly young – I mean, just
look around at the attendees here. They are urbanising fast, and they are aspirational, with disposable incomes growing rapidly. They're savvy. A large fraction of them have pre-paid
plans and multiple SIMs to get the best voice and data plans.
They’re unique. They have a strong sense of their own
identity and culture that is different from the rest of the
world. And yet, we at Google believe that they have the same
fundamental needs. When they get online, they want to talk to
friends, they want to be entertained, they want to
understand the world around them; they want information to
make their lives better. If you saw search queries from Mumbai, you'd find they're not that different from search queries from New York City. You might find queries like, “What time does this train leave?” “Where is the nearest doctor?” “When is the new movie with Deepika coming out?” Right now, though, these users face serious challenges – serious challenges in getting the information they need and having a good experience with the internet. They have low-spec
phones which are usually running an old version of Android, and their storage is constantly running out. They have serious connection issues – data can be slow and intermittent. It can sometimes take minutes to load a map and even longer to buffer a
video. When they do manage to connect to the internet, they
find there is not much localised content. Let me give you an
example: Wikipedia in Hindi, which by the way is the fourth
most spoken language in the world, has just two per cent –
two per cent – of the content of Wikipedia in English. So,
Google’s approach to this is pretty simple: first, we want to
ensure that everyone has access to the internet. The rest is
meaningless without a working internet connection. Second, we
want to build platforms that enable developers like
yourselves to build meaningful experiences for everyone. Third,
we want to build products that are directly relevant to our
next billion users. Let me start with access, which really is
the foundation. In India, we have partnered with Indian Railways and RailTel to provide high-speed Wi-Fi in hundreds of train stations across India. It is the largest public Wi-Fi project in India, with millions of people now using this service. In addition to access, we also need to create awareness and educate people on how to use the internet. Our Internet Saathi initiative operates in around 100,000 villages in 12 states across India. We have over 25,000 Internet Saathis – these are women who have been trained to
help other women in the village learn about the internet and how
it can be used to better their lives. These Saathis have
trained over 12 million women over the past two years, and the
impact that the internet is having on these women and their
communities is truly incredible. Of course, Google cannot solve
all the needs of everyone everywhere by itself, so we want
to make sure that we make strong platforms that allow
everyone to contribute and grow with the internet. That ability to
participate and contribute to the internet is key. It’s what
in turn makes the experience better for everyone. While we
have lots more work to do, one thing we are proud of is our
support of many languages in our key platforms. At the end of
the day, Google is a products company. So we are working hard
to try to make our products fast, relevant, and accessible
in our users’ own languages. Last year, we launched offline
maps which lets people save a map over Wi-Fi and then use the
map just as they would if they were online. In 2013, we let
people take videos offline in YouTube, launching first in
India, as well as Indonesia and the Philippines. Now this feature is available in more than 80 countries worldwide. So this is
the cool thing: we've learned that, when we tap into user insights – whether on how people connect, or how they overcome constraints in any market – these insights tend to hold true globally, which allows us to make products better for everyone. In this case, I myself use offline maps and YouTube
offline all the time. And, when the market needs it, we will
build products that are made for India first.
That’s exactly what we did a little over two months ago when
we launched Tez, a consumer payments mobile app that leverages UPI to deliver a refreshing new experience for users across India. Tez is made for India first, made to be as simple to use as cash, and provides Google-scale security to our users. It has been out a little bit over two months, and we've seen more than 10 million users complete more than 74 million transactions. If you haven't checked it out, please
do so today. Just yesterday, we
launched another brand new product made for our Next
Billion Users first, called Datally. It is a mobile data manager and saver – an app which allows people to get the most out of their mobile data. Datally has three core features:
understand your data, control your data, and save data by
finding free Wi-Fi near you. Now, we are proud of what we
have achieved so far. But we are also aware that there is a lot
more work to do. And we’re just getting started. So I’ve just
given you a glimpse into what we are working on for the Next
Billion Users’ markets. I might be biased, but I truly believe
that India is an amazing place for technology. Fortune 500
companies, start-up hubs, entrepreneurs, dev centres –
they’re all blooming across India, and we have some of the
best talent in the world. I’m lucky enough to work with many
on a daily basis. That’s why we believe it is very important for
us to meet and work with all of you at events such as these. We
want to hear your feedback on our products and programmes so
that we can give you what you need to turn your ideas into
reality, whether you’re building for the next billion, like me
and my team, or you’re building apps for use all across the
globe. We want to enable you to focus on the problems that you
are trying to solve and minimise the pain points of building a
product. The Google Developers Team is on the ground in over
130 countries, and, within India, thousands of you are
participating as Developer Experts and as part of GDGs, and we are continuing to grow the India ecosystem through programmes such as Women Techmakers, the Agency Programme, and Launchpad. We're also working on providing trainings to deepen your technical capabilities year-round. In fact, you might recall the goal we've committed to: to train 2 million Indian developers – 2 million Indian developers – by 2020. To date, we've engaged over 500,000
developers through our various training programmes, along with more than 1,000 faculty members from 400 colleges. Additionally, 11 state technical universities have adopted our Android Developer Fundamentals curriculum. We have also launched developer student clubs in 23 states, and they have trained over 6,000 students. That's in just three months. [Applause].
Finally, we have recently announced a partnership with
Pluralsight to offer free IT content to help skill 100,000 Indian developers through the Pluralsight platform. We believe in India, and we want to cultivate the ecosystem here. Let me give you an example. Meet Jimit.
Jimit’s father is a street mechanic. Jimit always assumed
he would follow in his father's footsteps. But he always loved to code, so after completing his six-month training to become a mechanic, he asked his father, “Can I take some time to pursue my real dream of becoming a developer?” His father agreed, and from then on, each day, he went to a part of town where he could access public Wi-Fi. He sat on the spot where the signal was the strongest and began taking Android courses through Udacity. After he completed his training,
he began applying for jobs as a professional developer. And,
today, Jimit supports his family with the salary he’s earning as
a developer. Jimit's story is just one of many inspiring stories that motivate us to continue pushing towards our goal of training as many developers as possible, and building the products and platforms that are most useful to all of you. Now, I would like to bring up some colleagues to share updates on the products across our developer platforms. Let's get started with how we are continuing to improve the Android development process. Please welcome Dan. Thank you! Suswagata!
Good morning! It is the best time ever to be an Android developer, and I can say that because I've been developing on Android for over nine years. I've been at Google for seven of those years, but I've never seen anything like what we have now – this incredible confluence of meaningful developer changes: more powerful tooling, a clear path forward for app design, a new programming language, support for on-device machine intelligence, and fundamental improvements to the distribution model. Much of this change derives from listening to all of you in our developer community. All of this is happening amidst the incredible momentum that Android continues to have. We are seeing 2 billion active devices on Android, and 82 billion apps installed from Play. What is even more amazing is how this momentum is making so many developers
successful. The number of developers with over 1 million
installs grew 35 per cent in the last year, and to leverage this
distribution to build great businesses, we expanded direct
carrier billing to reach 900 million devices with 140
operators. Altogether, the number of people buying on Play
grew over 30 per cent in the past year. But that’s not
enough. We know we can make distribution even better by
removing the friction from app installs and by making the
entire experience more dynamic. Instant apps is one of our big
bets to bring users to your apps, and it's seeing great results. OneFootball saw the number of users who read news increase by 55 per cent. Vimeo increased their session durations by 130 per cent. And there are many more stories like these. At I/O, we opened up instant apps to everyone, which means anyone can build and publish an Android instant app. We've made instant apps available on more than 500 million devices where Google Play operates. An instant app is downloaded feature by feature: you organise your project into feature modules, and you can use the exact same code in your instant app and your installable app. We're easing the process of refactoring apps with the new modularised refactoring tooling, which helps you move code and resources between modules. We've also included better on-the-wire compression. When you're ready, you upload your app in the Play console. To get started building an instant app today, go to g.co/InstantApps. We announced that Kotlin is a fully supported
Android programming language, and the developer community
support for Kotlin was a huge driver of our decision to
embrace it. Since that announcement, we've seen a massive increase in Kotlin activity. The number of apps in the Play Store that use Kotlin has grown three times, and we observe that 17 per cent of Android Studio projects are now using Kotlin. Of course, Android Studio 3.0 has now been released, bundled with full support for Kotlin, including Kotlin templates for activities, Lint support, and command-line builds. But we didn't stop there. We are building docs and content around Kotlin. We've published style guides on GitHub to provide guidance on style. We've updated Support Library 27, making the APIs friendlier to Kotlin, and we're doing this while increasing our commitment to the Java programming language, with support for language features such as lambdas reaching back to any SDK version. Kotlin makes programming more fun and productive, combining a concise syntax with modern features such as functional programming and the ability to write DSLs. Now, minimising install friction with instant
apps and the Kotlin programming language are two of the ways we have listened to your feedback. We've also focused on speed, smarts, and Android platform support. You can see all the speed and smarts behind me, but I want to pull out one thing in particular: your feedback has helped drive sync and build times down. Config time has dropped, and we are continuing to work on build performance. In Android Studio 3.1,
now in canary, you will find D8, our new dex compiler, which outputs smaller files with the same runtime performance. On the emulator, we've added the Play Store for end-to-end testing, and you will find Oreo system images, improved profilers, and hundreds of other helpful tools. To make it easier to download Android build dependencies, we are now distributing them through our own Maven repository. You've asked us to make the framework easier to work with, and for a better solution for lifecycles. Our Architecture Components – libraries for common tasks – are now in a stable release, covering the ViewModel pattern, data storage, and fragment lifecycles. We also have a preview of the Paging library, which makes it easy to work with huge data sets. App quality is an essential
piece of growing your business. We took a sample of apps and analysed the correlation between app quality and business success. What we learned is that when apps move from average to good quality, we see a six-fold increase in spend and a seven-fold increase in retention. Quality is queen. To help you make sure you reach the right devices, you can now target specific devices in the Play console. You can browse a device catalogue, and if your app needs a certain amount of RAM or has issues on specific devices, you can set targeting rules to address this as well. Before excluding devices, you can even see your installs, ratings, and details per device. We've also added an Android vitals
dashboard in the Play console so you can see aggregate data about your app to pinpoint common issues – excessive crash rates, excessive wake-ups, and more – enhanced by new profilers and instrumentation in the platform. Speaking of the platform, Android Oreo does so much for developers. We are in preview for Oreo 8.1, which includes a new Neural Networks API to build accelerated on-device machine-learning applications, including recognition and prediction. Oreo has vastly improved font support, notification controls, and a new native pro-audio API.
We've made massive improvements and a series of optimisations to make your apps run smoother. We've introduced adaptive icons to improve the launcher experience, and Google Play Protect is now on every Google Play device. We've improved accessibility, autofill, and smart text selection, added wide-gamut colour, and improved multi-display support. I will be diving into this in more detail later today. Let's talk about some of the ways we are extending Android. ARCore brings augmented reality to Android using motion tracking and environmental understanding. It is being offered as a preview so you can start experimenting with new experiences and give us feedback. This preview is the first step in the journey to enabling AR capabilities across the whole Android ecosystem. Android
Things makes building connected mass-market products easy, with the best of Android and Google APIs on a managed platform. I'm excited to announce that developer preview six is now available. Android Things hardware is based on a system-on-module architecture: the CPU, memory, and networking components sit on a module that can be produced cheaply as a generic part made in large quantities. For each SOM, support is provided as a board support package. For production, you can build your own custom board around the SOM, reducing costs and simplifying hardware development. On the software side of things, you build a standard APK. Google provides the update mechanisms, so you can roll out updates to your devices and get security updates from the people who maintain Android. You can focus on your core business instead of having to worry about patching kernels and Android.
Along with Kotlin and Firebase, you can also leverage the power of TensorFlow on the platform. It is still early days, but we are seeing incredible growth in Android Auto, with the number of compatible cars spanning over 50 car brands; it's well on its way to becoming a standard feature in every new car. We've made it available to all Android users, opening up the platform ecosystem to many kinds of drivers, no matter what kind of car they drive. During the holiday season last year, Android Wear saw huge growth, doubling from 12 brands to 24 – the choice of Android Wear watches doubled. Developers are taking advantage of Wear 2.0 and its standalone functionality, which allows apps to work no matter which phone the watch is connected to. Finally, strong partnerships allowed
us to grow the number of Android TV devices activated last year, and we expect that trend to continue. We are seeing partners ship Android TV in set-top boxes. We've expanded our international footprint to 70 countries, and there are now more than 3,000 Android TV apps in the store. With so many ways for people to interact with Android, the strong communities supporting Android investment, the improvements in the platform, tooling, and language, and the distribution of Google Play, it is really the best time ever to be an Android developer. Please check out the trainings, sessions, and code labs to see how we can make Android great together. All of these form factors are tapping into the power of the Google Assistant, so it's my pleasure to welcome Sachit to the stage.
Thanks, Dan. Hi, everyone. The Google Assistant is available
across many devices from your phone to your TV. It is
available on voice-activated speakers like Google Home. You
as a developer have the ability to leverage Actions on Google to
build conversational experiences through the Google
assistant. But today, I’m going to tell you
about all the new features we’ve added to the actions on
Google platform to make your apps for the Assistant even more capable. You can build apps for all sorts of assistive use cases with voice and visual interfaces, like shopping for clothes or ordering food from a lengthy menu. With UI elements like image carousels, users can seamlessly transition to get things done with your app. We've
also opened up powerful transaction experiences in the
US and UK, helping developers grow their business by making it
easy to complete purchases of physical goods and services
through the Google Assistant on phones. This can be done with
Google-facilitated payments or their own stored payment methods
for users who sign into their app. Speaking of sign-in,
there's a seamless one-tap flow for linking a rewards account to
the assistant. Orders can be tracked, modified, or even
repeated using the transaction history view accessible in the
Google Assistant. But none of this matters if users can’t
discover your app. We’ve rolled out an app directory within the
assistant experience on your cell phone with, “What’s new”
and “what’s trending” sections which will change and evolve,
creating more opportunities for your app to be discovered by
users. We're using your app's description and sample invocations to match users' search queries to new task-based subcategories of apps. We're even launching a new For Families badge to help users find apps appropriate for all ages. And we're making finding your app easier by learning from the directory and other information provided by you, the developer. Thanks to these signals, the Assistant can often respond to general requests like, “Play a game”, with a few different options from third parties. Improving discovery is important for us, so you can expect ongoing investment and improvements in this area. Once users have found your app, they want frictionless
assistive experiences. We are committed to enabling you to build for innovative new use cases, so, in the last couple of months, we've exposed specific Assistant capabilities to developers. For instance, developers can now transfer their app's conversation from a voice-activated speaker to a mobile phone mid-dialogue. We've also improved apps' voice UI capabilities. We've introduced
a proactive update feature in developer preview which allows
you to request users to register for regularly scheduled
updates, or even push notifications. This opens new
doors for app re-engagement and usage. Imagine being able to
connect with your users each day to remind them about an upcoming
event or provide them with an urgent alert directly through
the assistant. In order to leverage all these features, it
is important for us that the development process is smooth.
The Actions console is your central hub for development. It
helps you work as a team, choose the right tool for development,
and collect data on your app’s usage, performance, and
user-discovery patterns. It's integrated with the Firebase and Google Cloud consoles, so it's easy to incorporate into your existing projects. In addition to the console, we are providing you access to developer tools that allow you to build apps for the Assistant. We've worked with a number of tools companies to make their solutions compatible with Actions on Google, and we've expanded the capabilities of the newly renamed Dialogflow, featuring an inline code editor and analytics.
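As a minimal sketch of what a fulfillment webhook for a Dialogflow-based app can look like – assuming the actions-on-google v1 client library deployed as a Cloud Function; the 'order.food' action name and the reply text are hypothetical:

    import * as functions from 'firebase-functions';
    // The v1 client library exposed DialogflowApp for Dialogflow-based actions.
    const { DialogflowApp } = require('actions-on-google');

    export const fulfillment = functions.https.onRequest((request, response) => {
      const app = new DialogflowApp({ request, response });
      const actionMap = new Map<string, (app: any) => void>();
      // Respond to the hypothetical 'order.food' action and keep the mic open.
      actionMap.set('order.food', a => a.ask('What would you like from the menu today?'));
      app.handleRequest(actionMap);
    });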
One tool I'm proud to promote is Templates. These allow you to build a fully functional, high-quality app for the Assistant with no code at all. Just pick a template type, such as a trivia game, fill in a spreadsheet with your content, and you're ready to publish in minutes. I want every single one of you to try this as soon as you can. I promise it is that easy. While we're still in the
early days of the platform, we are focusing on making it more
robust and expanding its reach and capabilities. We support the Google Assistant SDK, which lets you embed the Assistant into your own hardware devices, and, with our smart home integration, it's possible to build IoT devices controlled from the Assistant. We are working to open up new languages for Actions on Google: we've launched Australian and Indian English, as well as French, Korean, Japanese, Spanish, and other languages. We are excited for the road ahead, and we want more of you to join us in developing for the platform. With new capabilities like proactive updates and an improved developer experience, we think
this is an incredible opportunity for all of us. The
magic of the assistant is enabled by Google’s deep
investment in AI and the cloud. To tell us more about that,
please welcome Kaz. [Applause].
Hello, everyone, I'm Kaz Sato, a developer advocate. I would like to introduce machine-learning solutions and services from Google. AI, machine learning, and neural networks all have scientific definitions, but you can think of AI as the science of making things smart – like building an automatically driving car, or drawing beautiful pictures. There have been many approaches to realising the vision of AI, and one of them is machine learning, or ML. With ML, you can programme your computer with data, not with a programme written by human programmers, so computers can find certain patterns in data to solve various problems. In ML, there are many different algorithms,
and one of them is deep learning, or deep neural
network. Since 2012, we’ve been seeing a big breakthrough in the
area of neural networks, so Google has been making a
significant investment in developing a neural network
technology. Google has been deploying deep-learning technologies in more than 100 products, such as Google Search, Android, Maps, and Gmail. For example, in Google Photos, deep-learning algorithms recognise objects in pictures so you don't have to add labels or tags yourself. The Inbox mobile app has the Smart Reply feature, which uses natural language processing technology to generate replies for each email thread, so you can use one of them to reply on the thread. Over 12 per cent of the app's responses are generated by this feature. Google Translate has introduced a new neural translation model, generating much more natural translations. And now, Google is focusing on externalising the
power of machine learning to customers and developers. One of those solutions is the Cloud ML APIs – pre-trained ML models such as the Vision API for image recognition, the Speech API for voice recognition, and the Natural Language API for text processing. Another ML solution is TensorFlow.
TensorFlow is an open-source software library for machine-learning development that allows you to customise your own machine-learning model. It is the standard tool inside Google for any ML development; Google open-sourced it in November 2015. TensorFlow is scalable and portable: you can start running TensorFlow on your laptop, then speed up training with GPUs. Because TensorFlow is scalable, you don't have to change your TensorFlow code to bring it to a large cluster or distribute it. Once you've finished training your TensorFlow model, you can run it on various devices, such as smartphones. With those benefits, TensorFlow is the most popular deep-learning framework in the industry now. It's used by many large enterprises for their production use cases. Let me show a
demonstration called “Find Your Candy”, which demonstrates the machine-learning APIs and TensorFlow as a total ML solution. Let's look at the video. [Video] So click on that and speak into the mic. “May I have some gum?” It understood what you said. Now it's doing natural language processing. It is identifying the noun there – gum – so it will then match a candy based upon the model that has been trained. Come on, come on! And it is picking chewing gum. And there, the camera identified extra-long-lasting watermelon gum. That's great. I get to keep this; I've got seven boxes back there. Thanks so much. [End of video] As we saw, it provides you with
a real-world ML solution that allows you to learn the latest
deep-learning technology to solve your business problems
today. With that, I would like to invite Anita on stage to tell
you more about TensorFlow Lite. Thank you. I'm the technical programme manager for TensorFlow on the Google Brain team.
Bangalore is my home town, and I’m really excited to be here
with all of you. Go Bangalore! [Cheering]. I will be
introducing you to TensorFlow lite and why we need an
on-device machine-learning library. On the one side,
machine-learning traditionally has been run on powerful
machines with a tremendous amount of computing power. On
the other side, mobile devices are ubiquitous and are getting
more and more powerful. Some of these devices have more
computing power than what NASA had when they first sent a man
on the moon. Think about that for a minute: we essentially walk around with supercomputers in our pockets these days. These strengths enable us to shift some of the machine-learning workloads from the cloud back to the device, specifically enabling machine-learning inference on mobile and embedded devices, and that pushes the boundaries a little further.
There are several reasons why on-device machine-learning is
useful. First, application developers might want to
maintain functionality and do inference while offline. Second,
applications may have low-latency requirements in the
order of milliseconds and can't afford a round-trip back to the cloud. Third, privacy-sensitive data might not be able to leave the device. There is also a need for applications to work under low bandwidth, where you don't have the luxury of downloading a huge model at the time of inference. Fourth, processing sometimes needs to be done without turning on power-hungry radios. These are some of the motivations for doing on-device ML. Even though
on-device ML sounds like a great idea, mobile devices come with
many challenges and have to operate under constrained
environments compared to their workstation counterparts. There
is limited network bandwidth, limited memory, sometimes, even
limited computation. At the same time, these mobile devices have
aggressive release and engineering cycles, which means there's hardware heterogeneity that machine learning has to cope with. We decided that building a production library whose sole focus was mobile devices was essential. TensorFlow is primarily for large devices, and TensorFlow Lite is for smaller devices. Putting it simply, TensorFlow Lite is a machine-learning library for doing inference on mobile and embedded devices that is easier, faster, and smaller. TensorFlow Lite has
support for the Android Neural Networks API, which enables hardware acceleration by leveraging custom accelerators on the phone. We released the first developer preview of TensorFlow Lite a couple of weeks back, with support for popular image classification models as well as a text-based Smart Reply model. We can't wait to see what you all come up with using this on-device inference library. Please welcome Tal on stage to tell you more about Chrome. Thank you. Hi, my name is Tal, and I'm
from the Chrome team. I'm excited to talk about some of the improvements we've made to the web over the past year. The web is big, with over 2 billion instances of Chrome. We know that the web has tremendous reach. One of the true strengths
of the web is that it is bigger than any single browser.
Regardless of whether the device is a smartphone, or a laptop,
or a desktop, or a tablet, they all have a browser. So any
web-based experience is available on these billions of
devices today. We've seen this have a real impact on how many users web apps are reaching. We've also seen how quickly
mobile has been growing. Native apps have been growing at
an incredible pace with it. But what is really remarkable is
that even with the web’s large initial reach, we’ve seen the
average monthly web audience growing even faster. And because
of this growth, we are seeing the web expand into new areas,
with experiences like WebVR being built on the web platform.
With the web pretty much everywhere, we are constantly
trying to push the boundaries on what it can do. Over the past
year, we've shipped hundreds of additional APIs that cover a range of capabilities, from making it easier to integrate payments to building fully capable offline media experiences directly on the web. But beyond just these core
capabilities, we’re also ensuring that the mobile web
works well with the India Stack. For example, with our Payment Request API, it is easy to tie into the popular payment methods of every region, so, in India, we've made sure it integrates with Tez, connecting with local businesses, banks, and India's Unified Payments Interface. With all of these capabilities, the
modern mobile web also allows developers to build deep, rich
mobile experiences with something that we call
Progressive Web Apps, or PWAs. PWAs are about helping web
developers leverage the web’s new capabilities to build
high-class experiences that really feel immersive. They can
load quickly, work offline, and you can send notifications to
users. We’ve seen a number of amazing experiences taking
advantage of these new capabilities. As just one
example, there's Ola, a popular ride-sharing service here in India, which built a Progressive Web App to reach users in tier-two and tier-three cities. They have a polished, fast, immersive experience that works on any connection, sending users
notifications, and it’s built completely on the mobile web, so
it is already accessible on billions of devices. We are
excited to announce that the reach of this PWA technology is huge, as the core technology powering it is now supported across top browsers, including the UC Browser in India. With the ability to create immersive experiences like this, we also want to make sure users can get back to them really easily. Add to
Homescreen has allowed users to add an experience to their home
screen. With our improved Add to Homescreen flow, when you add a PWA to the home screen, it is fully integrated into the platform, so it feels like any other app experience on the device. It will appear in the Android launcher alongside your Android apps and appear in Android storage settings. Since it is a PWA, it is inherently small, so users are able to get an immersive experience without requiring extensive storage
is available now. So, with all of these new capabilities, we’ve
also been working to make sure it’s easy for web developers to
build these experiences. We will be going into a lot more detail
on how to develop PWAs throughout the mobile web track,
but no matter how you’re building your web app,
Lighthouse is a tool that can show you how to improve your web
experience. It quickly audits your site to identify how you
can improve your app’s performance, accessibility, and
progressive web appiness, and we are excited to announce, as of
M60, Lighthouse is now directly integrated into DevTools, so now
you can quickly see how your website is doing and what to do
next, directly in Chrome. With all of these tools, we’ve seen
just how easy it can be for companies to take advantage of
these new capabilities for their web experience. To give another
example, there’s Voot, a popular video-streaming site,
also based here in India, and it's an experience built on the web, so you can get to it directly. It can be easily accessed from the launcher or the Android home screen. When you open it, you get a high-class, immersive
automatically rotates to allow for full-screen experiences, and
with some of the newest APIs, they can support downloading of
videos and offline playback. Since it’s built on the web,
users can get this entire experience immediately on their
devices. This is just one example of
many. Leveraging the modern mobile web is now the norm
around the world. Whether they’re building a PWA from
scratch, or leveraging the latest capabilities on their
existing web experience, companies everywhere are seeing
a tangible impact on their key metrics. With the modern mobile
web, it is possible to easily build immersive, fully capable
experiences that can reach billions of people around the
world today. And now, let’s turn our focus to what we are doing
to make it easier to develop apps and grow your business.
Please welcome Francis!
[Applause] Hi, I’m Francis, and I lead the
Firebase product team. Our mission is to help developers
like you build a better app and grow a more successful business.
At I/O 2016, we expanded Firebase from a set of backend services to a broad mobile platform to help you solve many of the common problems you face across the life cycle of your app – from helping you build faster and more easily with products like the Realtime Database, to helping you better understand and grow your users with tools like Analytics and Cloud Messaging. Whether you're starting something new or
looking to extend an existing app, we’re here to help you so
that you can channel more of your time and energy towards
creating value for your users. And we make this available all
through a single easy-to-use SDK available across platforms. To
date, there are over 1 million developers that have used
Firebase, and we are humbled that so many of you have trusted
us with your apps, and we are committed to helping you
succeed. Over the last year, our team has made many updates to
Firebase, and I'd like to highlight a few of these. First, let's start with backend services, where we provide you with the core building blocks that help you build your apps faster and more easily. One of these is Cloud Firestore, a NoSQL document database that syncs your app's data automatically. It makes it a lot more intuitive for you to structure your data. It is also fully managed and built on Google's global infrastructure, so that it will scale with you and you don't have to worry about managing your machine sizes, RAM allocation, or networks.
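As an illustrative sketch of that structure using the Firebase JavaScript SDK (the v8-era namespaced API) – the 'cities' collection and its fields are hypothetical:

    import firebase from 'firebase/app';
    import 'firebase/firestore';

    firebase.initializeApp({ projectId: 'your-project-id' /* ...rest of your config */ });
    const db = firebase.firestore();

    // Documents live in collections; write one into the 'cities' collection.
    db.collection('cities').doc('BLR').set({ name: 'Bangalore', population: 12300000 });

    // Listen for realtime updates as documents in the collection change.
    db.collection('cities').onSnapshot(snapshot => {
      snapshot.forEach(doc => console.log(doc.id, doc.data()));
    });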
Now, Cloud Firestore, like other Firebase products, also works with Cloud Functions, which gives you a way to deploy your JavaScript code to the cloud and execute it based on HTTP requests or other events happening across Firebase. So, for example, you can write a function to extend Cloud Firestore to do some server-side processing, like data validation, whenever a document is uploaded. With Firestore, Functions, and our other backend services, your app will scale with your workload from prototype to planet scale, and free you from managing your own servers.
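A minimal sketch of such a validation function, written against today's firebase-functions API – the 'reviews' collection and its rating rule are hypothetical:

    import * as functions from 'firebase-functions';

    // Runs whenever a new document appears in the 'reviews' collection.
    export const validateReview = functions.firestore
      .document('reviews/{reviewId}')
      .onCreate((snapshot) => {
        const review = snapshot.data();
        // Simple server-side validation: delete documents with an out-of-range rating.
        if (typeof review.rating !== 'number' || review.rating < 1 || review.rating > 5) {
          return snapshot.ref.delete();
        }
        return null;
      });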
Let's switch gears to talk about some other updates that could help you better understand and improve your app's stability. Since welcoming the Fabric team to Google earlier this year, we've integrated Crashlytics into Firebase as our crash reporting product, which helps you monitor crashes and errors.
It is also really important to understand how your app performs out in the field, because users often abandon slow-running apps. That's where Firebase Performance Monitoring can help you better understand how your app performs across a diversity of devices and network conditions. Now, with just one line of code, you can get insights into your app's start-up time and network latency, and, by adding custom metrics, you can understand how your app performs through the user flows that you really care about. This is a great way to find the bottlenecks in your app that could be impacting your user engagement and your business bottom line. In addition to helping you build a better
app, Firebase also helps you grow and engage more users. First, let's talk about Firebase Cloud Messaging, or FCM, which gives you an easy way to send notifications to engage your users. FCM is integrated with Analytics, so it gives you many options to send targeted notifications to different user groups or app versions.
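As a sketch of what sending a targeted notification from a trusted server can look like with the Firebase Admin SDK – the 'daily-deals' topic and the message text are hypothetical:

    import * as admin from 'firebase-admin';

    admin.initializeApp();

    // Send a notification to every device subscribed to the 'daily-deals' topic.
    async function notifySubscribers(): Promise<void> {
      await admin.messaging().sendToTopic('daily-deals', {
        notification: {
          title: 'Weekend sale is live',
          body: 'Open the app to see offers picked for you.',
        },
      });
    }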
Another great way to drive user engagement is by creating a personalised experience, and Remote Config helps you do that more easily by enabling you to change your app's configuration remotely and at run time. It is also integrated with Analytics, so you can fine-tune and customise your app experience for different user segments or app versions. Now, many developers use FCM and Remote Config to create a more targeted experience, but we've also heard from many of you that you want an easier and more powerful way to test different variants. For that, we've recently released first-class A/B testing support for Firebase. With A/B testing, you can serve different messages or values to different groups of users, and it will help you figure out which of these variants performs best for the goals that you've specified. For example, you can figure out whether the orange button or the blue button helps drive more user purchases. I'm also very
excited to share that we’ve taken our first step of bringing
Google’s machine-learning to Firebase with the release of
Firebase Predictions. Predictions applies ML to your analytics data and helps you predict user behaviour, like churn, spend, or other events that you've specified as important to your app. It's also integrated with other Firebase products, so you can take targeted actions, like triggering an in-app promotion, using Remote Config, for users most likely to spend; or sending a push notification to target users who are likely to churn; or running A/B tests across these different groups. I'm excited to be here sharing these updates on behalf of the Firebase team with many of you here. I look forward to hearing your feedback and continuing to work hard to help you build a better app and grow a more successful business. Thank you. [Applause]. With that, I would like to welcome … back for some final remarks.
Thanks everyone for joining us this morning. I hope you’re all
as excited as I am about the progress we’ve been making with
our developer product and platforms. Thank you to our
speakers: Dan, Sachit, Kaz, Anita, Tal, and Francis. For the
rest of the day, you’re invited to participate in technical
sessions, trainings, code labs, and explore the Sandboxes right
outside. Please enjoy the Google Developer Days India Event. Thank you. We’re happy that
you will be joining us for talks, hands-on training and
more. Your badge must be visibly worn at all times and don’t
forget you’ll need it for the amazing after-party at the end
of the day. The help desk is located near registration. If
you have any questions or need assistance, please feel free to drop by. Be sure to visit the
rooms in the conference centre where instructors will teach you
how to use the latest Google technologies. No Google event
would be complete without showcasing the newest products
and technologies, so we invite you to explore the different demos, office hours, and review clinics. Be sure to check out the Community Lounge and the certification lounge, which are also located in the hall 3B Sandbox area. There will be
places to sit, relax, and meet with your peers. We are looking
forward to a fun two days with you and would like to take this
opportunity to remind you that Google is dedicated to providing an inclusive event experience for everyone, and by
attending, you agree to our code of conduct, which has been
placed around the venue and on the website. Thanks for
attending Google Developer Days India and have a wonderful time
exploring everything that Google has to offer. Ladies and
gentlemen, please make your way to your chosen break-out session
which will begin in five minutes.
SAM: Namaskara! Suswagata! Welcome to Bangalore. For those of you from Bangalore, thank you for welcoming us to your wonderful city. My name is Sam Dutton, and I'm a developer. I've only been here a few days, and I've already met a bunch of brilliant developers doing great work on the web. I live in London, but I grew up in South Australia, quite close to the beautiful Adelaide cricket oval.
I don't know if anyone has ever seen it? Anyway, I'm here today and tomorrow with a lot of other Googlers who are web developers working on Chrome, so, if you have questions, please come and chat to us in the Sandbox area. So, I
think we are at a turning point for the web, and I would like
to explain why I think Progressive Web Apps are at the
heart of that. This is my third visit to Bangalore.
Since the first time I was in Bangalore, there has been a
revolution on the mobile web. In that time, the explosion of
mobile usage has completely transformed the landscape of the
web. Of course, we’ve seen a huge growth in the number of
people who only go online on mobile, even in the United
States, as these figures show. Of course, there is a massive
increase in internet usage. China, as you can see, still on
top. Check out the growth in India. That’s just fantastic.
There’s massive potential for growth on the web, and, again,
this is where India really stands out. I was just looking
at a round-up of devices, and noticing this increased access to affordable devices: these phones you've seen, they're 3,000 to 5,000 rupees, and their specs are good – quad core, a gig of RAM, maybe 16 gigs of storage. You can run some pretty powerful websites on these devices. We have had crazy price wars, with lots of factors driving down data prices. Now, data in India costs well below two per cent of GDP per capita, which has been the threshold of affordability for most people.
But, it's not all good news. Globally, 60 per cent of mobile connections are still 2G. You know, even in London, where I live, you will find lots of areas with poor cell coverage, or no connectivity at all. In the US, as you can see, a lot of people do not have access to fast broadband. It is the same in regions where infrastructure is often poor, and we see a similar picture in many parts of the world. Back in India, most mobile connections are still predicted to be on 2G even by 2020. Mobile-only doesn't mean that everyone on the web has a high-tech smartphone. Most are on older, low-spec phones, and there are even newer low-spec phones – the Jio Phone, the Alcatel Go Flip – which have been an accessible upgrade for some users, not least the hundreds of millions of feature-phone users who want to buy pizza and watch kabaddi online. This is all a major challenge for web
developers. We are at a turning point for the web, and let me explain with a little history
where the web has gone. Does anyone actually know what this
is? This is a modem. This is how we used to connect to the
internet. Back in the days when 56 kilobits a second was optimistic. Does anyone remember this? Dynamic HTML? It spread through the web and was powerful in its use of JavaScript, but it also gave us dysfunctional slide-out menus, crazy page transitions, and so on. This toxic combination of over-optimistic coding and poor connectivity meant that some sites in those days would take several minutes to load. So, DHTML and its use of
JavaScript was powerful, and gave us some interesting stuff,
but also a lot of poor experiences. And, to be honest,
a lot of us as developers wilfully ignored the consequences of using JavaScript in that way, and all that peaked around the time of the dot-com bust, around the turn of the millennium, which coincidentally coincides with the demise of DHTML. But out of the ashes of the dot-com boom came a new generation of pared-down, performant websites – sites that really worked. At that point, we saw
sites making good use of the web’s capabilities as well as
its reach. For example, early online maps were like a
revelation, but still very slow and clunky to use. Google's use of Ajax for Google Maps and Gmail in the mid-2000s really completed the transformation of the web that we call Web 2.0. More recently, we've seen an incredible transformation as the web platform has become increasingly capable, as you can see from this slide, which I stole from Tal's keynote
presentation. The web had the reach, and then it got capabilities to match native apps, so everything is cool, right? It's fantastic. Well, the problem has been that, with all this capability, and perhaps because developers are often living in a world with great connectivity and brilliant broadband, we started seeing this kind of stuff: sites with poor page-load performance and a huge number of requests on first load. So, rather like the bad old days of the dot-com boom and DHTML, some developers seemed to be ignoring real users with real-world connectivity. You've probably heard this many times – we are going to keep showing this slide until things change: 19 seconds is the time it takes the average mobile web page to load on 3G.
It is a miracle anyone ever loads some pages, I think. And pages have become heavier, too. That's a problem for performance; but also, for many users, data cost is a major constraint on internet access – even more than connectivity. Of course, heavy payloads don't just add to data costs and radio usage; the assets required to run an app can also use up limited storage space. So it's really important to keep apps
light. I think Progressive Web Apps are really a response to
all this, and an attitude to building great web experiences.
Progressive Web Apps make the most of the web’s reach and
capability but they also showed respect to users by being honest
about real-world constraints and the constraints of devices.
But what does that mean in practice?
Well, you know, as a brand, Progressive Web Apps has been
extremely powerful, very quickly. You know, I think we
hit peak PWA when I came across this article in The Grocer magazine saying that Progressive Web Apps are cool. You know a technology brand is working when it is recommended by a journal for supermarket executives. But what does
this mean for developers? I think Alex Russell really
nailed it with a very specific checklist of the entry-level
features that should be part of every website. If you haven't read his blog post about this, you should check it out. It is a manifesto for great web experiences. My colleague Chris has recently coined the acronym FIRE – fast, integrated, reliable, engaging – and it really sums up what Progressive Web Apps are about. The expectation from users is that web experiences will be fast, engaging, and reliable, and that they're integrated with device hardware and platforms, and with other apps. So this is why we're at a turning point for the web: successful developers are getting to what works and what matters to users.
A huge leap forward in capabilities has been combined with resilient, performant design.
Indian developers have taught us you don’t need perfect
connectivity to provide a great online experience. That’s where
Bangalore, I have to say, really is leading the world, I think.
The flip side of this is that we get this huge rise now in user
expectations. The web gets better, so users expect more.
The bar is high and getting higher. It is not enough just to
provide a service and expect the reach of the web to make you
successful. A great example from the Chrome dev summit here.
It is functional, the information is kind of there, it
kind of works. But users expect something more from you. The new
site on the right is a much better experience. It is a
beautiful piece of work, I think. The point here is that it
is no longer enough just to provide a service. So, a big
part of what makes Progressive Web Apps successful is that multiple browsers are committed to the technologies that enable them. Developer adoption is growing for Progressive Web Apps, and so is browser support: Opera, Samsung, and Mozilla are on board, and service worker is in the Safari tech preview, with the Cache API and many other features already in place in Safari. One little furry creature has been missing from this list. UC is a popular browser in India and other countries, and I'm really pleased at the incredible work from the engineering team at UC – what they've done to move their browser to the Blink engine, making it a great platform for Progressive Web Apps. So, with this in mind, I would like to welcome on stage a very special guest.
>>Hi, everyone. >>Good to see you here. UC now provides basic support for PWA features, including service worker, the Cache API, and Fetch. But the great news is that in version 4.0, UC will provide full support for web push and Add to Homescreen, fully embracing Progressive Web Apps on the platform. If you have any questions about the UC browser, please contact us via the email on the screen.
SAM: The UC team will be available in the Sandbox – please come and talk to them. Thank you. Remember what I said: apps need to be fast, integrated, reliable, and engaging. I would like to dive into that in some more detail. You know, loading an app has to feel instant, an invisible process. Remember that most users will abandon sites that take more than a few seconds to load. There are some great sites I would like to call out. Ele.me is a Chinese site with 260 million users – almost Indian scale. It's interactive in
two seconds. So, you know, check out the code – it's really worth a look. They're doing a great job there. Page-load speed
is a critical goal of the Amp project. They took a radically
realistic approach building for the real world, and if you want
to learn more about working with Amp, check out Ben Morss’s
presentation, and we have an Amp training session tomorrow
afternoon. As you find out, Amp and Progressive Web Apps can
work really well for building sites and apps that start fast
and stay fast. So, like I said, web apps should be kind of
invisible. By that, I mean you need seamless integration
across devices, platforms, and contexts.
Apps should not get between users and what they want to do.
In fact, I think people should not even have to think about the
fact that they’re on the web, a native app or whatever. It is
that they’re using their phone or tablet or whatever to
complete a task. A case in point is e-commerce: it is all about removing friction by integrating across device, platform, and context. Just to be clear, you know, mobile
commerce is a huge deal. Last year, mobile commerce was worth
$123 billion in the US alone – it is incredible. It’s no
surprise, given the rise of mobile computing, the majority
of commercial traffic is coming from mobile devices. Now, what
is surprising here is how much of that mobile commerce is
actually happening on the web rather than in native apps.
However, conversion rates for the mobile web, as many of you will know, are far lower than for desktop websites. Mobile conversions are about a third of desktop conversions, and this is a fundamental challenge for the web. You know, the web has gone mobile, but conversions on mobile remain low. That makes sense in many ways, because entering data on mobile is hard. Please, if you do one thing today: mark up your forms properly! It's easy: add autocomplete and type attributes. This is so easy. Do a pull request today. Make us all happy! It just makes your users' lives much, much easier.
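The usual home for these attributes is directly in your HTML markup; this TypeScript sketch just illustrates which attributes matter (the element IDs are hypothetical):

    const phone = document.querySelector<HTMLInputElement>('#phone');
    if (phone) {
      phone.type = 'tel';          // brings up the phone keypad on mobile
      phone.autocomplete = 'tel';  // lets the browser autofill a saved number
    }
    const email = document.querySelector<HTMLInputElement>('#email');
    if (email) {
      email.type = 'email';
      email.autocomplete = 'email';
    }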
But, as much as autofill is great, it is really not enough to transform the web for e-commerce. The PaymentRequest API goes a step further. If you don't know it, it's a W3C standard for browsers that provides an interface for users to enter payment and shipping data, so customers get a consistent experience across platforms, and developers don't have to reinvent the wheel – whether you're a tiny boutique or a giant e-commerce retailer.
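Here is a minimal sketch of the PaymentRequest flow – the payment method, the amounts, and the commented-out server call are hypothetical:

    async function checkout(): Promise<void> {
      const request = new PaymentRequest(
        [{ supportedMethods: 'basic-card' }],
        { total: { label: 'Total', amount: { currency: 'INR', value: '649.00' } } }
      );
      if (await request.canMakePayment()) {
        const response = await request.show(); // the browser shows its payment sheet
        // await sendToYourServer(response);   // process the payment on your backend
        await response.complete('success');
      }
    }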
What is interesting in India is the rise of debit cards and all varieties of online payments. It's predicted that, by 2020, more than 50 per cent of India's internet users will be using digital payments, and the top 100 million users will drive 70 per cent of digital payments, which is pretty amazing. Another area where we've seen fantastic integration on the
web is with media. We've given developers the APIs and the ability to build robust, secure, and efficient media experiences on the web. We've made it possible to download and consume media offline. With media, it is particularly important to get this right: over 70 per cent of internet traffic is video, and that number is increasing – especially, of course, here in India, with what has happened with 4G and so on – and video is becoming something that users just expect on the web. Media
companies are seeing a lot of success here. JioCinema, you notice, was an app-only business, and went to the web for the first time a couple of months ago with this PWA. They're already seeing around ten per cent longer session times on average than for their native app, and better reach for their customers in tier-two and tier-three cities, which is fantastic. If you want to learn more about media capabilities, I really recommend looking at this brilliant Progressive Web App for media at bit.ly/pwa-media. It gives you custom controls, thumbnails,
and downloadable video. It is a great place to start if you’re
thinking of implementing video. This technology really opens up the web to even more platforms: with WebVR, we see VR coming to the web, from companies like Sketchfab, with lots of scenes to explore. Anyway, enough of that stuff. Just getting back to
basics, I want to talk about reliability for a moment. In
order for web apps to take a place on the home screen, they need to be reliable. We've become conditioned to think that the web only works with a live network. This is where service workers come in. I want to ask: who has heard of service workers and has a good idea of what they are and why they're game-changing for the web? Who has worked with service workers? That's pretty good. Okay. The traditional web model, for those
who haven’t worked with a service worker is that the
browser goes to the network and looks up the web server and asks
for a page and its resources. The browser has an HTTP cache
but the developer can't control that. If the network is down,
you know, you get a visit from our friend, the Downasaur, and it
can be worse with flaky connectivity. With service
workers, you don’t necessarily need to traverse the network
every time. The service worker is a client-side proxy that acts
as an intermediary between you and the outside world. It is
great for handling an offline connection or unreliable
connectivity. For example, you can implement time-outs for
network requests to make sure that users are never kept waiting. Service workers are
pretty straightforward, but as you can see here, service worker
design can get complex for more sophisticated caching strategies in
the real world. This is where Workbox comes in,
making it easy to build successful caching strategies
into your web app, enabling you to support offline and
handle unreliable connectivity. The training team
will be running a session to teach you how to use it this
afternoon. All these pillars — fast, reliable — lead to
engaging. I want to show you one app that does that particularly
well. For those unfamiliar with Trivago, it is one of the
world’s leading hotel search engines, operating in 55
countries. And, it shows how investment in reliable
experiences really pays off. That's because Trivago used a
service worker to build a really resilient web app. I think that
service worker and cache API mean network resilience is
becoming the norm for high-quality web experiences.
Successful sites like Trivago are really embracing that. Now,
their PWA is really providing business value as well, like a
huge increase in click-throughs to hotel offers for them. As Jenny Gove will be
explaining in her UX talk later, we know the benefits of
getting apps onto the home screen if they're something users turn to
regularly, and again, Trivago is doing a great job of this. As
you can see, the code to enable Add to Home Screen is really
straightforward. In a manifest file, you specify
a title, an icon, and other details of how your app appears on the
user's device, and you add a link to the manifest in
the HTML. You can add this to your site in about five minutes.
All in all, Progressive Web Apps provide the reach of the web
with the capabilities to improve engagement on mobile. Now, I
know this all sounds great, but reality sinks in when we go back
to our day jobs. Building a Progressive Web App can seem
like a huge undertaking.
Implementing PWA techniques does not have to be a monolithic
refactoring task, and I want to talk about some ways to get
started. Progressive Web Apps need a solid foundation. No
amount of PWA magic is going to fix, like, blocking JavaScript
or bloated images. You need to fix those problems first before
implementing PWA techniques. I mean, when it comes down to it,
Progressive Web Apps are websites, yeah? So there are
some relatively simple changes that you can implement that can
have a big impact on performance, and for security,
reliability, and data cost. Now, rather than trying to talk
through these large numbers of bullet points, I recommend you
take a look at the guide linked to here. This, like I say, shows
how to make a number of relatively simple fixes that
will improve any website. These are low-hanging fruit — really,
table stakes for building a Progressive Web
App. We’ve been working on training resources for
Progressive Web Apps, and also on a certification programme,
so, on stage now, I would like to invite Sarah Clark who
manages our training team. Thank you, Sarah!
SARAH: There we go! So, I'm Sarah Clark, the programme manager in
charge of web training development and certification.
Probably a few of you have seen me on the Google Developer India
YouTube channel teaching Progressive Web Apps. I came out
here to Bangalore in February and that’s where we filmed the
class. Remember this in the first session? These are
features we’ve only added in the last year. The web is getting
new features at incredible speed, and best practices are
changing so quickly. How do you make sure your skills are
up to date? Now, we have three things that have been out for
most of the year, or longer. The site will point you
to all of them. We put the entire PWA course on
developers.Google.com. We have additional courses on
Udacity. These are free. There is no cost for any of them. The
courses we've published recently, and the classroom sources
are all open source. If you want to take them and teach them in
your company, or if you want to teach them commercially, talk to
me, and we will get you the materials. My team has built
the mobile web specialist certification programme and we
announced this a month ago in Krakow. We looked at 10,000 job
descriptions to see what hiring managers wanted. It
is not always the absolute latest thing but it is what gets
you ready for everything we’re talking about here. So, it is a
global certification, so we looked at jobs around the world,
and it includes the skills you will need for international
markets. So, for example, if you're building for the US or Europe,
these are core skills you will need there. Certification is
all online. It is a test that you take where you solve about
14 to 17 real programming challenges in four hours. It is
not super easy. Most developers have to study, and most
developers need at least three to four years of real world
experience, or some pretty intensive study to get through
it. SAM: I can vouch for that. I did
the test. It is really hard! SARAH: The best thing is
that it guarantees, hey, you know your stuff. We provide a
free study guide to make sure your skills are up to date. Feel
free to take a look at that. If you saw the news last week
and it was mentioned in the opening session, we announced
130,000 scholarships here in India last week. 30,000 of
those are for Udacity courses, and 10,000 of those
include the study course leading up to the certification and the
cost of the certification. 100,000 of these are
subscriptions to Pluralsight, where there's a lot of useful
material you can learn from. There’s a lot to remember.
Remember developers.google.com/training/
India, getting you to the India-specific page with all the
information including these scholarships. Sam, let's wrap
this up. SAM: Thank you, Sarah. I would
really recommend the training as a great way to keep your
skills up to date. Anyway, so, to get started with Progressive
Web Apps, we have some great sessions as you can see coming
up today and tomorrow. More technical content, and we also
have expert Googlers onhand, people from the Chrome team to
answer your questions and solve your problems in the web area.
Come and check out our stuff in the Sandbox and the Lighthouse.
SARAH: The web team is running three 90-minute workshops: how to
build web apps that work with data online and off; using
Workbox for your site's PWA; and then, tomorrow morning, combining
accelerated mobile pages and PWA. They’re in the other
building, so come on over. They're free. Bring your laptop,
and we will get you through some really interesting stuff.
SAM: If you want to learn more about PWAs, I’ve put together a
lot of links at bit.ly/pwa-resources. Great
stuff there from external experts. Lastly, always, if you
want to learn about what is new on the web, check out web
fundamentals, our content. It is right at the forefront, stuff
from people who are building browsers and working on web
specs. That’s it. Most of all, please feel free to contact
Sarah and me, and speak to any of the Googlers here.
SARAH: Thank you. SAM: Thank you so much. We are
here to learn what you’re doing on the web, and find out what we
can do to make the web better and help you build great
experiences for all your users. Thank you so much. Thank you. [Applause].
SEAN: Hi, I'm Sean McQuillan. Today, I'm going to talk about how
Kotlin can help you write great code. I will dive into typical
test code that you write every day. Fair warning:
most of this talk is code, but I thought I would start out by
talking a bit about what Kotlin is and why we’ve decided to
support it for Android. Kotlin is a modern programming
language. It has type inference, first-class functions, lambdas,
co-routines, and all the features you would expect from a
modern language. It borrows the best features from other
languages to allow you to write less boilerplate code and write
better software. Kotlin is a language built for industry.
What do I mean by that? It is expressive without sacrificing
performance. It is easy and natural to write Kotlin code
that has the same runtime performance as code written in the
Java programming language. It removes enough boilerplate to scale your
source code to hundreds of thousands of lines of code or
more, and does it without ever sacrificing readability for
abstractions. There's just enough boilerplate. When it adds
abstractions, Kotlin has chosen to keep readability and it is
possible to find out what a line of code does by reading the
source. You know it is going to have great tooling and is fully
supported in Android Studio. Kotlin wouldn’t be interesting
if it didn't work with our existing programmes, or if it
was hard to learn. Kotlin works with existing source code,
libraries and frameworks so well that it is really easy to
extend your existing programmes with new abstractions. I will do
that a lot later in this talk. It is also really easy to learn
Kotlin. There are not many surprises. Most of the things in
Kotlin work exactly the way you're used to from the Java
programming language. It removes the boilerplate and adds
expressive new features. I would like to introduce you to
our Testing Ninja, our mascot at Google for testing on
Android. Since I will be talking about
testing today to motivate Kotlin, I thought it would be a
good idea to talk about what kind
writing for our Android apps. I will revisit some of the
content at IO 2017 about how to write Android apps. We want to
write unit tests which tests an isolated unit like a class or
method and does it by isolating the dependencies by using mocks.
You can get extremely quick feedback, and unit tests
execute an order of magnitude faster than other kinds of test.
So, executing thousands of unit tests takes seconds, not
minutes. This quick debug cycle lets you be confident as a
developer, finding errors as you introduce them into the code
base, not minutes or hours later. Modules that don't
have Android dependencies should be tested with plain JUnit, and
Robolectric is a good solution for isolating your code from
Android. We recommend about 70 per cent of your tests be written
as unit tests. Next in our testing story, we have
integration tests, which differ from unit tests because they
combine multiple components in our apps. You might test a
fragment, coupled with the database and the network layer.
Integration tests are more precise than end-to-end tests
although they are less precise than unit tests because they
test multiple features together. For these reasons, we recommend
running about 20 per cent of your tests as integration tests.
Finally, end-to-end tests. Is everyone here familiar with
Espresso? It is the UI testing framework
that allows you to do things like click on a button and assert
on app state. It is the best way to ensure your app works,
because it works at the full integration level all the way up to
the UI. However, that does mean it's
much, much slower than unit tests. Moreover, if you only write
Espresso tests for your app, it's difficult to find out what
line of code caused the test failure because the test runs
against the fully integrated app with all of the components
working together. At Google, we recommend about ten per cent of
an app's tests be written as end-to-end tests. For
example, you want to make sure your sign-up form is exercised,
but then do exhaustive testing with other kinds of test. Let's
get back to Kotlin. That’s enough testing theory for today.
I’m going to dive into simple unit tests. They’re probably
similar to tests that you write every day. I’m going to use
Kotlin features to build abstractions and improve
readability while cutting out boilerplate. Let’s talk about
how to extend existing APIs in Kotlin. I will be writing a test
for this activity, which has a help menu item. The help menu
item is removed in some code paths, so I want to make sure
it's added in the default case. If you're familiar with
Robolectric, these should be easy to follow. If not, I will
explain as we go along. Our first attempt at implementing
this in Kotlin might look like this. To get the menu, we asked
for the shadow of the activity. The shadow is a wrapper class
provided by Robolectric that adds extra functionality for
testing. Next, we use the shadow to get the options menu from
our activity. The options menu is provided by Robolectric. You
could have instead called onCreateOptionsMenu yourself, but this
simplifies the test.
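As a rough sketch — assuming Robolectric 3.x, and with MainActivity and R.id.help as hypothetical names for the activity and menu item under test — that first attempt might look like this:

```kotlin
import org.junit.Test
import org.junit.runner.RunWith
import org.robolectric.Robolectric
import org.robolectric.RobolectricTestRunner
import org.robolectric.Shadows

@RunWith(RobolectricTestRunner::class)
class MainActivityTest {
    @Test
    fun helpItemShownByDefault() {
        // Drive the (hypothetical) activity through its lifecycle.
        val activity = Robolectric.setupActivity(MainActivity::class.java)
        // Ask Robolectric for the activity's shadow, then the options menu.
        // Note the explicit getter call — the first edit below removes it.
        val menu = Shadows.shadowOf(activity).getOptionsMenu()
        val helpItem = menu.findItem(R.id.help) // hypothetical menu id
        // assertions on helpItem go here
    }
}
```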
This is the first edit we will make for Kotlin. Accessing getters and setters in Kotlin is done
with property syntax. The getter still gets called, but you
don't need to type it every single time. Let's go ahead and
look at that shadowOf method. By looking at it, we can tell it
is a static method that takes a single parameter. In fact,
since Robolectric is open-source, I jumped in and found
out it’s, in fact, a static method that takes a single
parameter of an activity. This construction works extremely
well and we’ve all used static methods to extend the behaviour
of classes that we can edit, but wouldn’t it be cool if we could
add this method to our activity class but only when we are
writing tests? Kotlin provides extension
functions and extension properties to do exactly this.
We’re going to use them a lot today. An extension function is
an extension of an existing class, adding another member to
the class that already exists. Likewise, an extension property
adds a getter or a setter to a class that already exists. Let’s
look at how to do that. By using the extension syntax shown
here, you can add an extension function to the Activity class.
This is extremely powerful, not just because it lets you avoid
littering your code with static helpers — the IDE knows about it
too. You can also create extension properties.
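A minimal sketch of both forms — the shadow name is mine, and in practice you would keep only one of the two:

```kotlin
import android.app.Activity
import org.robolectric.Shadows
import org.robolectric.shadows.ShadowActivity

// Extension function: adds shadow() to Activity, visible only where
// this file is imported (e.g. test sources).
fun Activity.shadow(): ShadowActivity = Shadows.shadowOf(this)

// Extension property: the same idea, read like a field, with a
// custom getter written next to the declaration.
val Activity.shadow: ShadowActivity
    get() = Shadows.shadowOf(this)
```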
Here, you can see the syntax for that, and you will see
properties have syntax like in languages such as C# or Swift. If
you're like me, you're used to generating getters and setters in
your IDE, and you haven't written one in years. In Kotlin, you
don’t have to write getters and setters if you’re using the
implementation. If you need to write custom getters, for
example, here, where we have a synthetic getter, the property
is completely synthetic, so the syntax puts the getter next to
the field declaration. Never again will you have to hunt
through the class trying to find a getter for a member variable. This is a great example of how
Kotlin makes code more readable by leaving just the required
boilerplate. Extension methods and properties are really just
static methods once they’re compiled. You can call them from
your Java classes by passing an object as an argument, and this
is really the key to understanding extensions in
Kotlin. They’re just static methods. They have the exact
same resolution as a static method in Java, and there’s no
fancy call semantics. You are not allowed to call an extension
method or property if the class provides a real member with the
same signature. So our test can now use the extension property,
like it was defined on the activity class. This shows how
Kotlin can improve the API of an existing library by adding
abstractions. Doing this makes our code easier to read as well
as write, and to make it better, if someone comes across this
and doesn’t know what the extension property does, Android
Studio will help them find the definition. There’s a bit of
magic here, but it’s clearly spelled out and easy to figure
out how it works just by reading this file and using the tools.
Now, it's time to add an assertion to this test. I
will use assertEquals. This is a standard test. We could also check that
the item is visible and enabled, but let's keep it short
because we're in a talk. We've talked about extension functions
and properties. We will keep building on extensions in this
talk and see how they can power API extractions to simplify our
code. Now, I would like to turn our attention to defining new
APIs in Kotlin. What we really want for this test is a
fluent assertion. A fluent assertion reads like this: the
item should have the title "Help".
By using extension functions, you can add fluent assertions
directly to your existing types. In this case, we want to make
an assertion on MenuItem. This is a really simple function to
write, and it shows off the power of Kotlin extensions. If
we try to implement this in a language without extensions, it
would require building a large system of testing types to
expose the fluent assertion. Already, Kotlin is building up
our code here. I would like to add another feature. Kotlin
supports infix calls, which put the function name in the middle.
To make a function infix, add the infix keyword. It can be applied to
member methods or extension functions that take a single argument.
shouldHaveTitle takes a string, so we don't have to use the dot or
the parentheses. We've added a fluent assertion that makes our code
extremely readable; the infix assertion expresses what we want to
say in a very clean style.
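A sketch of that infix, fluent assertion — the assertion body is my guess at the slide's contents, and helpItem is hypothetical:

```kotlin
import android.view.MenuItem
import org.junit.Assert.assertEquals

// `infix` enables the dot- and parenthesis-free call style below;
// it works on functions with exactly one parameter.
infix fun MenuItem.shouldHaveTitle(expected: String) =
    assertEquals(expected, title.toString())

// At the call site, it reads almost like English:
// helpItem shouldHaveTitle "Help"
```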
I do want to say, Kotlin gives us a lot of power with
infix and operator overloading. Here, cleaning up a testing API,
it makes sense, but you want to be cautious when adding this to
your own code, and consider readability wherever it's used. Next, we're going
to cover a very powerful feature in Kotlin that we can
apply everywhere in our code base.
Kotlin provides reified types. Reified just means real. We
have been skipping over the set-up method in the tests on the
previous slides, and that's quite a bit of boilerplate. All we are
trying to say is "set up the activity", so let's write that in Kotlin.
setupActivity is a one-line function wrapping up the boilerplate.
You can see how reification works: you can access the type
parameter's class, which you couldn't do with an erased generic type.
This is extremely powerful, and all of our code bases are littered
with class arguments that Kotlin can help us clean up. The real power
is type inference: we don't have to specify the type repeatedly at the call site.
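A sketch of that helper, assuming it wraps Robolectric's setup call:

```kotlin
import android.app.Activity
import org.robolectric.Robolectric

// `inline` + `reified` keep T's class available at runtime, so no
// caller ever passes a Class object again.
inline fun <reified T : Activity> setupActivity(): T =
    Robolectric.setupActivity(T::class.java)

// Usage — the type is named exactly once:
// val activity = setupActivity<MainActivity>()
```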
So far, I've talked
about how extension functions work in a few ways, and I’ve
also showed how to use infix and fluent assertions and use
reified types. Now I want to talk about lambdas. These are
function literals, and, due to syntax support in Kotlin,
they're powerful as an abstraction. We will combine
them with other features to clean up our code. It is
somewhat novel if you're coming from the Java programming
language, so let's take a moment to cover the basics. You create a
lambda by surrounding a code block with curly brackets and
using the arrow syntax. Here, increment is assigned a
function literal: the function literal is defined with the
curly brackets, with the parameter on the left side of the arrow —
a value of type Int — and the body is the value plus one.
We call increment, which is just a variable, with two, and get the
value three. You can specify the type of the variable with the
arrow syntax like this: we are saying that increment is any
function that takes an integer and returns an integer. We
assign a lambda to the variable increment, and note that
value doesn't need its type specified again — it is worked out
using type inference. We are going to use that a lot to
clean up our code.
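The two declarations just described, as a sketch:

```kotlin
// Parameter type stated in the lambda; the variable's type,
// (Int) -> Int, is inferred.
val increment = { value: Int -> value + 1 }

// Function type stated on the variable; the parameter's type is
// then inferred inside the lambda.
val increment2: (Int) -> Int = { value -> value + 1 }

fun main() {
    println(increment(2)) // prints 3
}
```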
Lambdas get even more interesting when they are passed to functions.
Apply is a function that takes one argument of any type and then a
function from that type to Unit. Unit is Kotlin's way of saying "void".
If you look at the call syntax for apply, you can see that you
can pass a lambda to apply outside of the parentheses. You
can pass a lambda this way whenever the last parameter is a function
argument. This is really sweet syntax
sugar. You can use this call syntax to build expressive APIs
and cut tonnes of boilerplate, and you can combine it with
extension functions and type inference, making otherwise
tedious code beautiful.
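For instance, a quick sketch with the standard library's apply, whose last (and only) parameter is a function:

```kotlin
// The lambda sits entirely outside the parentheses; inside the
// block, `this` is the StringBuilder being configured.
val message = StringBuilder().apply {
    append("Hello, ")
    append("GDD India!")
}.toString()
```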
For example, you've all written argument captors using Mockito, and it looks something like
this. You call verify on a mock object and then pass a captor
argument to the mock call, allowing your test to access any
parameter that's been passed to the mock. By capturing a network
callback, the test can fake a network reply and continue as if the network
had returned a result. Of course, you need to declare
the captor. Look at all that boilerplate: ArgumentCaptor
is repeated twice, the network listener is repeated twice, and
even with all of that repetition, there's still an
unchecked cast on the generic type. Let's see what we can do in
Kotlin. In Kotlin, here's an example of a cleaned-up API. It
keeps enough code to be readable but removes all of the
extra boilerplate. Reading it, we can see that we make a
network captor, we use it to capture a value, and then we replay
the captured call. Let's build this in Kotlin. To get started,
we are going to create a type alias for the argument captor. This
is a useful way to reduce repetition when a generic type
is as long as the one you see here. The type alias can be used in
place of the expanded type, but it doesn't create a new type — it
is just an alias. Now, to define the networkCaptor
function, we use the type alias to simplify the repetitive code. Type
aliases provide semantic meaning without having to write another
class. networkCaptor takes a function
argument called verify. verify is any function that takes no
arguments and returns Unit, which is Kotlin's way of saying void. You can see that this function
type is an extension function on NetCaptor — I'm going to talk
about what this means in a minute. When we define a
function argument this way, it is called an extension lambda.
To actually capture an argument, you need to make an ArgumentCaptor. There's a bit of
magic here: it calls a function that has a complicated generic type
that we are not specifying. You might suspect I would use reified
types to make the ArgumentCaptor — you're right. It is a reified function
that produces a correctly typed ArgumentCaptor, and we design our code for type inference here.
This pattern applies in so many places in Android. You figure out
the type exactly once, on the ArgumentCaptor, and type inference
takes care of the rest: the type T is inferred for the function call
since it's reified. An
extension lambda is a floating extension function, and it's
pretty powerful. I've already showed you how to declare them.
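Assembling those pieces, a sketch of the library side — Callback and NetworkResult stand in for whatever listener type your app actually mocks:

```kotlin
import org.mockito.ArgumentCaptor

interface Callback<T> { fun onResult(result: T) } // assumed app type
class NetworkResult                               // assumed app type

// The long generic type, written exactly once.
typealias NetCaptor = ArgumentCaptor<Callback<NetworkResult>>

// Reified helper: produces a correctly typed captor with no Class
// argument and no unchecked cast at the call site.
inline fun <reified T> captorOf(): ArgumentCaptor<T> =
    ArgumentCaptor.forClass(T::class.java)

// `verify` is an extension lambda: inside it, `this` is the captor.
fun networkCaptor(verify: NetCaptor.() -> Unit): NetCaptor =
    captorOf<Callback<NetworkResult>>().apply { verify() }
```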
Let's talk about how to call them. To call an extension
lambda, we apply it to an object. In this example, we're
going to call the function verify on the object captor;
because verify is an extension lambda, it will pass
captor as the implicit receiver to the
function. This is really powerful, allowing us to build
APIs that look like language extensions. Of course, the caller is going to
want to use the captor outside our block, so we pass it back and
return to our test. Here, we can see how it looks to apply an
extension lambda: inside it, the ArgumentCaptor is the implicit
this, or we can say this explicitly. Extension lambdas
are a really powerful way to define APIs. They help us clean
up code whenever the receiver of the lambda is obvious. In this
example, it is clear that the block is operating on a NetCaptor, and the
capture method makes sense with an implicit this. We
don't want to use extension lambdas for every API, though. For the
second lambda, in our replayCall function, it is not obvious at
the call site what the argument is, so we
prefer to let the caller name it. Just by looking at
replayCall, we can see it is a function with one argument and
then replays the captured call. Passing the captured argument to
the lambda. In this case, the captured argument is a callback
so we name our variable appropriately, but in other
cases, it might be a string, an integer or other kinds of
objects. We've done all of these things before, so let's go
ahead and write that out in code. We first define an
extension function on any ArgumentCaptor.
It takes a function argument with a single parameter of type
T, and since we only want to replay a single call, we can
assert that there's been exactly one call to our argument
captor. You can see it is using the
implicit this from the extension replayCall to get allValues on
the ArgumentCaptor. And finally, it applies the function
argument to the captured value from the ArgumentCaptor.
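As a sketch (assertEquals from JUnit; the caller picks the lambda's parameter name, e.g. callback):

```kotlin
import org.junit.Assert.assertEquals
import org.mockito.ArgumentCaptor

// Replay exactly one captured call by handing the captured value
// to the caller's lambda.
fun <T> ArgumentCaptor<T>.replayCall(block: (T) -> Unit) {
    assertEquals(1, allValues.size) // implicit this: the captor
    block(value)                    // the single captured value
}
```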
All totalled, that was 14 lines of library code which will simplify
all of our tests in Kotlin. It uses lambdas, extensions, and
other powerful features in Kotlin, to turn this code into
this. We can write it like this throughout our entire code base
going forward – much simpler. I want to mention here the API
was inspired by a library called Kotlin Mockito. I think you
should check that out. Putting it all together, we've seen that
extension functions are a versatile and powerful feature
in Kotlin. It provides us with highly expressive API building
blocks like inline and reified types, and, when we go higher-order,
Kotlin really shines with its last-parameter call syntax and
extension lambdas, cutting the boilerplate.
That’s all the features of Kotlin that I’m going to cover
today. There are many more, and you can check out the
documentation to learn them. Before we go to lunch, I would
like to take a look at what is new in Android and Kotlin. Dan
covered this in the keynote, but here’s a recap. We published
the guides to provide a reference for Android Kotlin
style and interoperability with Java. We started to add
nullability annotations to make the API friendlier in Kotlin, so null
checks are valid when you call the support library, and we've
continued development in this area. Of course, there are
more Android developers every day using Kotlin for work and
for fun. The Kotlin open-source community is still getting
started, so now is a perfect time to start a new Kotlin
project or work on an existing one. Find me at the office
hours or training tomorrow if you want to learn more about Kotlin. We'll be back shortly. I'm going to start in about a
minute here. I just wanted to say hi to
everyone who’s here. I know that food is really
tempting there. How many people here have
actually played — downloaded the new Android?
How about Support Library 27? Mostly hands.
To me, the Support Library is the most important thing we do.
Next to, you know, maybe architecture components.
First of all, I’m Dan Galpin. I’m going to be giving a tour of
the Support Library updates and we have a lot to cover.
Android 8.1 — how many have downloaded an 8.1 image?
It adds a lot of targeted enhancements for Oreo.
Android Go, for example, including the elimination of
two services on low-RAM devices: the notification listener and
the condition provider.
We have TensorFlow Lite, and 8.1 includes the dedicated
Neural Networks API to distribute the workload to specialised
hardware. If you don't have that hardware,
it will execute on the CPU. You'll build your neural network
in TensorFlow Lite or Caffe2. We'll have a talk on TensorFlow
tomorrow. 8.1 adds a SharedMemory API.
This is a way to share data across processes: you get a SharedMemory
instance, and it is Parcelable so it can be shared with
other processes. Once you don't need access
anymore, you unmap the buffer. We support this through the NDK as well.
You can map for reading and writing, and unmap when done.
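A rough sketch of the Java-side API (android.os.SharedMemory, API 27):

```kotlin
import android.os.SharedMemory

fun sharedMemoryDemo() {
    // Create a named region, map it read/write, use it, release it.
    val shm = SharedMemory.create("demo-region", 1024)
    val buf = shm.mapReadWrite()
    buf.putInt(0, 42)        // write into the shared region
    SharedMemory.unmap(buf)  // unmap once access is no longer needed
    shm.close()
}
```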
We have new APIs for colors in 8.1.
You can use the WallpaperManager to get the most representative
colors of the wallpaper, and adjust the colors your UI
uses based on the API. If you build a live wallpaper,
you return your colors whenever they change.
I recommend using the utility functions to
select these automatically. Picture in Picture.
How many people have played with Picture in Picture?
You tell Android the activity supports it, and then you ask to enter
Picture in Picture mode. We have params here
for setting up the PiP window.
Use MediaSession, but you can also create custom actions with
icons to provide buttons on the window.
You can enter it by overriding onUserLeaveHint when your app
is doing something really important, like a video chat,
playing a video, or navigating. You get callbacks on your
transitions. Your activity is moved to a new
task when you enter PiP mode, and you lose the activity stack, so
you may want to re-create it. If you're a single-activity app,
this is really simple.
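A minimal sketch of entering PiP — the activity also needs android:supportsPictureInPicture="true" in the manifest:

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.util.Rational

class PlayerActivity : Activity() {
    override fun onUserLeaveHint() {
        // Enter PiP when the user navigates away mid-playback.
        val params = PictureInPictureParams.Builder()
            .setAspectRatio(Rational(16, 9))
            .build()
        enterPictureInPictureMode(params)
    }
}
```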
We support multi-display: activities can be launched on
secondary displays, and they get their own configurations
and resource management. If you're targeting O, you can avoid
having your app letter-boxed — make your app work properly with
long screens. We have adaptive launcher icons.
Launcher icon shapes on Android are effectively random: launchers
shrink the icon, use a different icon, or drop it onto a random
colour shape. Adaptive icons fix this. They provide a background and a
foreground layer; with a circular mask, the icon will look like
this, and the same icon is used in the share sheet dialogue.
Previously we recommended that app icons be 72dp in
size, but it's important, when a third-party launcher renders the
icon, that it's not upsampled. So now we want 108dp, padded with extra
image around all four sides so launchers can do cool animations.
You can control how your brand looks, which is great.
It's really important to add adaptive icons.
We use an AdaptiveIconDrawable, with background and foreground tags.
If you don't want to waste space, you can use
the new inset tag, which allows for fractional insets, which is
pretty cool. So, you could use a 16.7 per cent inset
to pad your 72dp icon, and that reduces the amount of size
increase you would have. You can actually
use a vector image, which can really, really help keep things
small and make things great everywhere.
Oreo has a bunch of changes to notifications.
In previous versions, users could only block notifications outright.
Now we have channels: categories of notifications that share the same
behaviours. Users can click through to see all
the categories, and clicking on a category exposes settings like
vibration and sound. We added dots in the launcher
as a low-stress way to surface notifications, and, as you may have
noticed, you can install widgets this way too. All the things users
used to customise now apply per channel. So, a bullet-point
slide: make sure to set up your
channels with user happiness as your goal.
Of course, you can use NotificationCompat to build the
notification, but if you're targeting Oreo, you need to
use notification channels.
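A sketch of creating a channel and posting to it — the "updates" channel id and icon resource are hypothetical:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.support.v4.app.NotificationCompat

fun Context.postUpdate() {
    val manager = getSystemService(NotificationManager::class.java)
    // Creating an already-existing channel is a no-op, so this is
    // safe to call repeatedly.
    manager.createNotificationChannel(NotificationChannel(
        "updates", "Updates", NotificationManager.IMPORTANCE_DEFAULT))
    val notification = NotificationCompat.Builder(this, "updates")
        .setSmallIcon(R.drawable.ic_update) // hypothetical resource
        .setContentTitle("Hello, channels")
        .build()
    manager.notify(1, notification)
}
```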
So, in Android 7.1, we added launcher shortcuts, but
there was no indication in the app that pinning worked, and it was
not great for users and developers.
The new pinning flow is an upgrade over Android 7.1:
it asks the user where to place the shortcut.
Custom shortcuts were in 7.1 — you could add them from the widget tray
with an optional configuration screen —
and the Oreo API is an upgrade there too.
In 7.1, the app returned the shortcut directly in the
activity result; now we wrap that all up to update the shortcut,
and you can use the compat libraries to do all this.
We also have a way to surface this from inside your app, which is
awesome: you can have a
button that says, hey, install the widget.
We've added autofill to Android. There are autofill services, as well as
what you can do in an app. Standard views automatically
work, and you can make them work better by
providing hints. You can also mark fields that
autofill should ignore. The API 27 support library wraps the
autofill methods. So, yay.
You can request autofill on demand, like this,
and you can use it with completely
custom views — even ones drawn with Vulkan
or Canvas. Please consider integrating with
autofill. It is really cool, and it does
help give users a nice, warm welcome. Another thing you can do is
link autofill with your website.
You do this by creating a JSON file on your server
and modifying your manifest to point to a JSON resource.
And that's it: with these two steps, you share
credentials between your app and your website. This works for
autofill, and it's the first step that lets you
automatically sign in your users.
Autofill's pretty cool. It works mostly automatically.
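A sketch of the hints and the on-demand request (the field names are hypothetical):

```kotlin
import android.view.View
import android.view.autofill.AutofillManager

fun configureAutofill(usernameField: View, captchaField: View) {
    // Hint standard views so autofill services classify them correctly.
    usernameField.setAutofillHints(View.AUTOFILL_HINT_USERNAME)
    // Opt a field out of autofill entirely.
    captchaField.importantForAutofill = View.IMPORTANT_FOR_AUTOFILL_NO
    // Request autofill on demand, e.g. from a menu action.
    val afm = usernameField.context.getSystemService(AutofillManager::class.java)
    afm?.requestAutofill(usernameField)
}
```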
We have StrictMode improvements: the thread policy can detect
unbuffered I/O, and the VM policy will detect untagged sockets and
content URIs sent without granting the receiver permission.
We have seekable file descriptors, which are useful for
larger media sources, such as big audio and video files.
We've added proper support for cache quotas —
it takes deletable cache into account.
It is great. There's tons of other stuff in
O. You can add custom accessibility
actions, and there's paging for content providers to make
queries faster. But, let's turn our attention to
the Support Library. One of the things to note
here is that we dropped devices running less than API 14 — we got rid of
Gingerbread and Honeycomb. If you need to support those versions,
you still can with older releases. This gives us a bunch of
benefits, and we're going to deprecate more that we're going
to remove later. You'll notice deprecated APIs:
please migrate away from the deprecated methods, otherwise
you'll be surprised when you get the new version.
If you're using the Maven repository, you specify it like
this. All right.
One of the things I’ve been waiting for, forever, is the
support of custom fonts and to do this in the old world, you’d
have to load the Typeface and use the custom TextView
everywhere. This was no fun.
So, we have a new resource type for fonts that accepts a single
font as well as families. You can have families, which
include a whole group of fonts that work together,
and you can generate font families.
This is super easy to use in XML.
It can be reused, it can handle families,
and it supports attributes and styles.
Again, it's in Android O and back to API 14+.
Another reason to move to the latest Support Library.
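Resolving a font resource in code, as a sketch — R.font.lobster is a hypothetical resource; in XML you would just set android:fontFamily:

```kotlin
import android.content.Context
import android.support.v4.content.res.ResourcesCompat
import android.widget.TextView

fun applyCustomFont(context: Context, textView: TextView) {
    // Works back to API 14 through the support library.
    textView.typeface = ResourcesCompat.getFont(context, R.font.lobster)
}
```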
We have downloadable fonts, because fonts are big.
They bloat your apps, and now you don't have to bundle them in
your app: a font provider fetches them for
your app. You get to save space and make your
apps smaller, and the fonts are shared between apps.
So, this is great for the user, as well.
Again, we have over 800 fonts in Google Fonts that are all
supported. I highly recommend using it,
especially if you know you're going to be targeting devices
that have Google Play services. You get a callback for success
or failure — pretty straightforward here.
You can use FontsContractCompat.
It’s really easy to do this in XML.
Again, the best part about this is we’ve totally integrated this
in Android Studio. You can search for the font and
select it and see the font in your layout.
It is pretty darn cool. So, check out the sample app,
the Google font app on DAC for all the info and once again,
14+. We have the emoji compat
library. So, it's time to get rid of this
tofu — these empty boxes. We can make emoji show up
properly on older versions of Android.
It checks per glyph and replaces what's missing, and there are two ways to do
this. You make your font
request when you initialise in onCreate, or you can bundle the font in your
app — it's seven megabytes.
That allows you to work well with devices that don't have
Google Play services. You also have to use
EmojiTextView. Now we have unicorns and tacos.
Yay! So, check out the sample again,
here. Of course, this is API 19+; it
does require KitKat. Sorry about that.
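A sketch of the downloadable-font initialisation; the certificates array name follows the published sample:

```kotlin
import android.app.Application
import android.support.text.emoji.EmojiCompat
import android.support.text.emoji.FontRequestEmojiCompatConfig
import android.support.v4.provider.FontRequest

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Fetch the emoji font from the Google Fonts provider.
        val request = FontRequest(
            "com.google.android.gms.fonts", // provider authority
            "com.google.android.gms",       // provider package
            "Noto Color Emoji Compat",      // font query
            R.array.com_google_android_gms_fonts_certs)
        EmojiCompat.init(FontRequestEmojiCompatConfig(this, request))
    }
}
```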
This is a feature I’ve wanted in Android.
How many have wanted TextView resizing?
I’m kind of a font nerd, so I studied it when I was in school.
You can use the uniform auto-size text type, or get creative and
set a preset array of sizes for it to choose from. This
is in the Support Library too. I'm so happy we'll see this now
that we're supporting it.
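A one-line sketch via TextViewCompat:

```kotlin
import android.support.v4.widget.TextViewCompat
import android.widget.TextView

fun enableAutoSize(textView: TextView) {
    // Let the text shrink or grow to fit the view's bounds.
    TextViewCompat.setAutoSizeTextTypeWithDefaults(
        textView, TextViewCompat.AUTO_SIZE_TEXT_TYPE_UNIFORM)
}
```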
We've fixed vectors. This actually is not as bad as it might look.
How many people have hit this with vector drawables?
SVGs use the even/odd fill rule, which decides which regions
are inside and which are outside a path.
There was a big TODO in the source code saying, fix this,
and we didn't fix it until Android N.
Now we support this in the Support Library on API 14+.
So, this is pretty cool. The same XML looks great and
this is really awesome. Another thing that's kind of
cool is we're supporting path morphing in vector
drawables. So, again, you can do things like
morphing your animals like this.
This was done with Alex Lockwood's Shape Shifter, which
is awesome. We start with our vector XML
defining the starting image and extract all the path data.
Then we have an object animator, which is pretty cool.
So, here are the path values that morph from a buffalo to a hippo.
It's easy to do this in code. The other thing we can do is
use the bundled XML format to put this into one
giant file, which is pretty awesome:
instead of having to keep track of all these different files,
you can do it all together. It packages everything into one
file, which is cool. Now I have this cool morph.
This works down to API 14, so you can do stuff like this, which combines path morphing.
We can give the square an acceleration curve with a path
interpolator — the same path syntax we use for vector drawables —
and here is our XML definition for that. We use the same one for API 14
as we do for 26. It snaps back a small amount, and
we have our morph animation: it starts from a square, goes
down to a point, and we set this interpolator.
Think about using this; it really does help if you're
reusing it somewhere else.
It slowly tapers down. We also did the transition support
library, which is actually pretty cool.
Features from Lollipop and above, like propagation, are all
available in transition XML, and you can use the same transition
XML on API 14 and above. This is pretty slick, actually.
I highly recommend taking advantage of that.
Now to get into deeper stuff. How many people have played
with physics-based animations? My friend, Lisa, wrote a lot of
these slides for me. This is about real-world forces.
You used to have to approximate forces;
physics-based animation allows motion to be correctly simulated
in the UX. It makes all of this easier, to
make real motion happen in your app.
The first one is FlingAnimation: you start with an initial
velocity, apply friction, and end gradually.
It is cool. Here's pretty much the simplest
one you can make — it animates a view's translation, and
everything else is default. That's what you just saw there.
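Roughly, the simplest fling, from the dynamic-animation support library:

```kotlin
import android.support.animation.DynamicAnimation
import android.support.animation.FlingAnimation
import android.view.View

fun flingRight(view: View) {
    FlingAnimation(view, DynamicAnimation.TRANSLATION_X)
        .setStartVelocity(2000f) // px per second
        .setFriction(1.1f)       // default is 1.0f; higher stops sooner
        .start()
}
```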
We can actually customise the friction. The higher the friction, the
less distance your view will travel for a given velocity.
We also have the SpringAnimation type.
This springs back to the end point of the spring.
Every time the colour changes, that's a new spring animation.
This is the simplest one you can make:
its final translation is at equilibrium.
However, you can actually customise this by calling spring
on your animation, or by applying a SpringForce and setting the
damping ratio, the stiffness, and the final position, which is
also important. The default damping ratio here
is MEDIUM_BOUNCY. The lower the number, the more
oscillation you'll see — aka, bouncing. At one, critical damping, no
bounce. Do not underdamp your views,
because crazy bouncing makes users think, maybe I shouldn't use this.
Given a starting velocity, how far will it travel from the
end point, and how fast will it pull back?
With no bounce, you can see the stiffness better.
The lower stiffness is on the left.
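A sketch with an explicit SpringForce:

```kotlin
import android.support.animation.DynamicAnimation
import android.support.animation.SpringAnimation
import android.support.animation.SpringForce
import android.view.View

fun springBack(view: View) {
    // Pull the view back to x = 0 with a soft, slightly bouncy spring.
    SpringAnimation(view, DynamicAnimation.TRANSLATION_X)
        .setSpring(SpringForce(0f)
            .setDampingRatio(SpringForce.DAMPING_RATIO_MEDIUM_BOUNCY)
            .setStiffness(SpringForce.STIFFNESS_LOW))
        .start()
}
```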
You can create a SpringForce externally and reuse it.
You might want to allow user input.
VelocityTracker is one option. It has been around since API 1,
and it does what it says: it tracks the user's velocity
from their touches. You can also use a GestureDetector
to apply velocity to the ball.
You'd use this in a touch listener.
You call VelocityTracker.obtain(), feed it motion events,
and then, on ACTION_UP, you call computeCurrentVelocity()
and read the x and y values. You'll start two Dynamic
Animations, one for the x velocity and one
for the y velocity, and that still gives you a
smoothly-moving, interactive two-dimensional animation.
Careful flinging those balls because when you lose them, they
don’t come back. They have end listeners.
So, let’s chain these two FlingAnimations.
We create two at the same time, one for X and one for y.
We’re going to stop the first animation and create a new one
in the first direction, fixing our problem with the balls going
off the screen. We’ll add an update listener and
an end listener. There's probably a better way to
do this. If the ball goes off the edge, we'll cancel the
animation and get a callback to
onAnimationEnd. As you can see, I made an
extension function — I was tired of typing — and reused
the velocity in the opposite direction.
Check out the sign: it's not exactly that simple.
Reversing both the x and y velocities flings the ball straight
back at you; reversing only one keeps it
traveling in the other direction, and that gives you that 90-degree
bounce angle. If it's the horizontal edge,
this is a sticky effect where it sticks there with a spring
bounce. It's going to look something
like this: we're using the ball's current
translation value as the end value for the spring,
so it oscillates and reaches equilibrium.
There is a cool one in the Google I/O app.
Here's a real layout by Nick Butcher where, as
parts of the screen appear, they translate up subtly.
No acceleration or deceleration curves are hard-coded;
it is done with the same springs we saw.
What’s actually going on here? These two screens have almost
the same effect. The top ball is the lead and the
other two balls follow each other.
The lead view in the chain is the headline and icons.
The paragraphs below are following it and the FAB is
following the paragraphs. Instead of a touch listener, we
started with a spring; instead of giving it a velocity,
we give it a start value. We say, pull this spring back
this far. We made a SpringAnimation for the
paragraphs, and in the update listener we call animateToFinalPosition:
it sets the new final position
to the current position of the headline animation, and it starts
the paragraph animation if it isn't running yet.
Finally, we do the same thing for the floating action button.
The difference from what we were doing before with flings is
those were sequential, and these are happening simultaneously.
Beyond a floating action button, what else can you
animate? A lot of built-in properties.
Off the top of my head, you can spring and fling
alpha, translation, scroll, scale, and x, y and z, which are
absolute positions. If you want to animate something
crazier — say, scale x and y together — you make a
FloatPropertyCompat.
Give your animations a minimum and a maximum so they know when
to stop; a fling doesn't know on its own and would go on animating.
You can also restart every animation.
It's worth a shot.
Cancel your animations: if you reconfigure animations while
they're running, they will crash. These APIs are in the support
library, available, in this
case, for Jelly Bean and above. So, not 14, but 16.
All the code for this is in Lisa’s GitHub.
Hats off to Lisa. What I want you to understand is that these
physics-based animations are not
toys or just for games, and they're way more than bouncing
balls. They're a great way to bring
natural motion and interaction to your current UI.
So, certainly, I look forward to seeing what you make.
And, so that’s it. I hope that this was a wonderful
overview for you of a little bit of what we have in the
support library and Android and enjoy the rest of your day here
at GDD India. Thank you. Meggin Kearney: Wow!
Hi! I'm Meggin Kearney. I manage a
team of technical writers.
I have two kids, Amelia and Patrick.
I’ve been an engineering student, learning to build my
own web applications. One is called Village Assistant,
which is the topic of this talk today.
And this is Crystal Lambert. She couldn’t be here in-person
today, but I wanted to virtually introduce you to her.
Together, we spent many, many long hours building Village
Assistant as our final project at UC Berkeley.
So, when I talked to the people about building Village
Assistant, one of the main questions that people ask is,
why would you choose Google Assistant?
Why not build all your logic inside a web application?
And the answer to that question is basically a very personal
story. I live in San Francisco.
And like many San Franciscans, I wasn’t born there and my family
is pretty far away. So, my close circle of friends
are much more than my friends. My kids call them our family.
A year and a bit ago, one of our family members died of cancer.
Sorry to bum you out, but this is really important to Village
Assistant. And he left behind his wife and
three small kids, so I really wanted to help my friend, but I
had no idea how best to do so. I created a Google group for our
circle of friends. The idea was our friend could
contact the group and get whatever she needs.
She’s a lot more capable than I am.
She’s a lot more organized and a lot more relaxed with her kids.
I scream at mine all the time. And she’s the one that usually
helps us. It's a really strange thing to
be strong and capable and then all of a sudden need help. A
couple of weeks after she started trying out the Google group,
we were having dinner, and the kids were going crazy in the
background, and I asked her if the group was useful to her.
She said, I tried the group situation a few times.
But I have to be honest, it’s kind of a bit more work than
it's worth. She walked me through this
example. Around the kids bedtime, she
went to the refrigerator to get milk and realized she didn’t
have any milk so she reached out to the group to see if anyone
could get her any milk and she experienced this weird, anxious
feeling. She was worried about not having
milk the next day. She also then got this flood of
people saying, I can get you milk and so she found herself
trying to coordinate with people so she didn’t wind up with 5-10
gallons of milk the next day. The other thing that was really
tough, she told me, was people who couldn’t get her milk still
contacted her, wanted to know how she was feeling, wanted to
arrange some other need to be met in the future.
So, reaching out to the Google group for something as simple as
milk turned into something far more complicated and unrelated
to the direct need. So, after the conversation with
my friend, I spent a long time thinking about how we’ve grown
accustomed to communicating with each other in social circles
and we're really comfortable exchanging goods and services
with strangers — we get in their cars and they drive us around — and
we're cool with posting pictures and liking other families' posts.
We’re able to raise funds for causes through crowd-sourcing,
but it’s still really awkward to make specific requests to
people we know and care about. I found this quote by Marshall
McLuhan, who came up with the term global village.
He spoke about how technology advancements would vastly expand
the global village, but also would flatten it, kind of
diffuse the whole essence of what it means to be part of a
small community. When I started working on
Village Assistant, one of the things I wanted to do was give
the user the ability to ask for help, for any specific need
they had, to smaller, targeted communities, and I wanted to
soften that awkwardness that comes with asking for help.
Google Assistant provides a filter between users and their
communities. So, a user can make a specific
request to a chosen group, and Assistant coordinates the responses. Using push notifications, users
can decide to help or not without having to engage in
lengthy conversations outside of the direct scope of call to
action. So, here’s what is interesting.
Village Assistant is both a Google Assistant app and a
Progressive Web App. It's built using React.
The more frameworks and platforms you bring together, the more
interesting the apps can be, but it's challenging.
It’s kind of interesting looking at these products.
You get to see how they all work together.
So Google Assistant allows users to have a conversation with
Google in order to get things done and actions on Google let
developers like me extend Google Assistant so that user can have
a conversation with Google about your own app.
So, in my situation, users can connect to Village Assistant
from Google Assistant by asking to talk to Herald — he's the
persona that Crystal and I created.
We built a webhook to our Progressive Web App.
As the user has a conversation with Assistant, the webhook sends data to
and from the Assistant app, and it's built with Firebase
Functions. Village Assistant also uses Firebase Hosting, Cloud Firestore and
the Realtime Database. So, for someone like me who's
learning, it’s easy to get up and running fast with new
products. I really came to love that
Firebase has one console. I could
watch my data, service logs, hosting, and deployment, and
do it all from the same place. So, one key requirement for
Village Assistant is to create direct active and easy
engagement between users and the villages.
So I decided very early on that push notifications would be the
best way for users to stay active and engaged.
When a user goes to the Village Assistant Progressive Web App,
all they do is log in and receive a token, and they can
create or join villages; the token gets added to the village.
Our service worker in the Progressive Web App handles the
data, sends the token to the village, and listens for push
events so it can let the Google Assistant user know who's able
to help or not. So, I'm now going to attempt
something incredibly brave, I’m going to do a live demo.
The wifi’s holding up okay. Google Assistant seems to be
behaving. I also have videos and slides
and I’ll walk you through them. I’m really hoping this works. So, the first thing you want to
notice is this isn’t the first time I’ve been to this page and
you can see here that Chrome is saying, hey, do you want to add
Village Assistant to your homescreen?
I'm actually not going to do
that because I don't want to mess
it would work fine. At this point, I’m actually
going to log in. Oh!
Sorry. And, after I log in, I’m going
to hopefully get a message that it will ask me if I’ll allow
notifications. And I’ll allow them.
And then I’m going to go ahead — I created villages earlier so
if anything went wrong, it would be a bit smoother.
I’m just going to go in and view some of the villages that are
there, I’m going to join them. And everything seems okay.
So, the ui is really more of a demo ui because in the future, I
want to move most of this functionality into Google
Assistant so you can control your villages.
You can invite users. You can add yourself to a
village straight from the conversation.
Okay. So, now we’re going to try to
talk to Herald. So, let’s see what happens.
Okay, Google. Talk to village Herald. All right, let’s get the test
version of Herald. Hello, Herald here.
Would you like to get help or check on responses.
Meggin Kearney: Okay, Google, get help.
>>Great, let’s get you help. Meggin Kearney: Okay, Google.
I need milk. Select best village for your
need. You selected —
Meggin Kearney: Hopefully we will see a push notification.
There it is. Now, we’re going to say, yes, I
can totally help. And I’m going to add a message
and say, I — I’m typing really slow, here, because I’m super
nervous and it’s not easy to type when your hands are
shaking. Okay.
Here we go. All right.
So, that’s part two. Now we’re coming on to the final
part of the demo. Please work!
Okay, Google. Talk to village Herald.
All right getting the test version of village Herald.
Hello. Herald here.
Would you like to get help or check on responses for previous
needs. Meggin Kearney: Okay, Google.
Check on responses. We need to retrieve your active
needs. Sound good?
Meggin Kearney: Okay, Google. Sounds good.
Select need for updates. Here’s an update on getting help
with milk. Meggin Kearney: You can see,
it’s recording the possible responses.
Who said yes. Who said no.
Now we’re going to click to chat.
And, you can see a message from only the person who said they
could help.
I actually really like almond milk.
All right. And that’s it.
That’s the end to end workflow for Village Assistant.
[LAUGHTER] I know, I feel the same way.
So, the really cool thing is, the next few slides — we don't
really have to look at them because the demo worked.
Woohoo!
You can look at these slides in the future and get a better
sense of how things are working. I linked to the site so if you
want to play around with it, you’re more than welcome.
So, okay. Let’s move through.
And so now, I’m going to take a little bit of a deeper dive into
some of the implementation specifics, starting with Google
Assistant. One of the most interesting
parts of working with Google Assistant has been learning to
code human conversation. And it’s kind of like learning
how to talk to your kids. Does anyone here have kids?
All right. Okay.
You’ll get this then. So, I have a daughter and she
loves to talk and since she was very young, we would have these
really cool, engaging conversations back and forth and
I learned to ask her lots and lots of open-ended questions.
So, for example, in the morning, I would say to Amelia, I would
say, hey, Amelia, what do you want for breakfast?
She would answer with spontaneity. My son, he wants his needs met
immediately. That is the bottom line for him.
So, when I ask my son what he wants for breakfast and he says
chocolate, he doesn’t like when I say no.
Instead of asking him, hey, Patrick, what do you want for
breakfast. I ask him based on a set of
choices that I’ve already checked are available.
So, my first attempt at conversation design for Assistant was kind
of like asking, hey, what do you want for breakfast?
The more I coded conversation, the more I realized the value of
giving users choice and directing the flow of
conversation based on their selection.
So, I use Dialogflow to build our conversation and one of the
trickiest bits to building a conversation was figuring this
flow out. And the thing is, I didn’t
discover context parameters until later in my learning and
these were exactly what I needed.
So, at the end of the talk, I provide links to the context
docs and to the FactsApp sample and these are incredibly useful
things I wish I found in the beginning.
They show your conversations and they also show you how you can
direct the flow with context parameters.
So, another aspect of building an Assistant app was figuring
out the interaction between Google Assistant and our server
code. So, if you start playing around
with functions and Google Assistant, you will notice,
fairly quickly, that Google Assistant sends a lot of logs to
the server. Like, I mean, a lot of logs.
And it takes awhile to get used to parsing the useful
information from those logs. If you’re writing your own on
server code and you go to find out what they’re returning,
you’ll find yourself scrolling down the pages trying to find
them. You get really good at it, but
it takes some practice. Lastly, this is the final bit.
If you haven't worked a lot with OAuth and account linking,
prepare to spend a lot of time figuring out how to link all
your accounts together. OAuth is hard.
One thing I know about Google is it takes security very, very
seriously. So, you’re going to have to use
tokens. Like, you can’t — there’s no
other solution, you need to know how to pass your tokens.
And you’re also going to have to figure out smart ways to link
users in different parts of your end to end workflow together.
In all honesty, I had to get help from an Assistant engineer
to parse it all. Shout out — thanks very much,
Shuyang, for that help. After this talk and rest, I’d
really like to spend time diving into the token exchange and
properly understand it. Okay, so, now we’re going to
talk more about building Progressive Web Apps and
hopefully most of you have had a chance to attend the talk on
Progressive Web Apps today so you know about service workers
and caching resources off line. I think there are a few things
worth calling attention to. Don't wait to use Lighthouse.
It’s much better to run Lighthouse early on.
If you haven’t heard about Lighthouse yet, it’s a tool you
run on your site and it tells you a bunch of stuff in your app
you’re already doing well and it tells you stuff — really
specific stuff you can do to make your app progressively
better. There’s a Lighthouse talk today,
you should totally go and see that.
The one thing about the Lighthouse results: it's really
easy to fix the Progressive Web App score.
The harder part is fixing the performance score.
I linked to a blog post which gives you a pretty good, get
started, with how to roll with Lighthouse and debug your
progressive web app. I mentioned that we built our client with React; we created it using create-react-app. We hadn’t quite worked out how to cache the static resources that are minified in your build. We said, we’ll try it out with the latest update, and it was like, oh, my gosh, my service worker was created for me and my static resources were properly cached inside it.
It turns out that one of my coworkers, Jeffrey Posnick, went and added Workbox so you don’t have to do anything. You run your build and it does most of the offline support for you. You don’t have to use create-react-app, but you should use Workbox so you don’t have to wrap your head around caching.
Tada! That’s my Lighthouse score.
Woohoo. It’s not perfect.
There’s a lot more that we can do, but it’s not bad.
True story, up until kind of very early in the morning
yesterday, my performance score was more than 20 points lower
than that. And that’s because our app uses Materialize CSS, which had a mandatory jQuery dependency, and we were using it to activate our drop-down menu. So, I’m happy to say that literally two weeks ago, Materialize CSS released a beta version which removed the jQuery dependency and tries to clean up a lot of bloat.
So, I went, yesterday, and implemented a React function
and, woohoo, the score went up. So, I’m kind of happy about
that. Materialize CSS also has Sass support. I can inline all my CSS server-side, and that’ll make the score go even higher.
So, developers love to save time and that often means using
libraries so we don’t have to write all the code.
Linking to libraries in a website seriously affects
performance. I think it’s fair to say – as me trying to test my demo today on really bad wifi shows – we’re very impatient with waiting for things to load.
And I think, as technology gets better, users are only going to
go to sites that load fast or fast enough in an environment
that they’re in. Many developers who are building libraries are genuinely doing their best to make it possible to take care of most of the heavy lifting server-side.
They’re trying to make it so their library isn’t going to
hammer your first page load. But I think we have a
responsibility, too. We have to think about our users
and make sure they’re getting the information they want as
fast as possible. So, there’s a balance here.
Now, Firebase products. I really love Firebase.
I do. I really love Firebase.
But I wanted to briefly cover a couple challenges we had with
Firebase. There’s a subtle difference between how Firebase notifications and web push notifications are structured, and we wanted to create custom actions inside our notifications so users could click on them and we could record the responses. All our push is handled by the service worker. But the Firebase notification API doesn’t include an actions parameter, so we had to pass in our actions as strings of data and then parse the data in a recognizable way in the service worker.
It’s a little tricky to get used to, so Matt Gaunt and I documented how it went
together on Stack Overflow. Firebase functions work great
with Google Assistant apps. But you got to remember to check
your quotas. You’ll notice there are so many logs coming in for Google Assistant, and if you’re doing your own logs as well, and as your server starts to get more and
more complicated, you’ll start to notice that your logs just
stop coming in. You can’t see anything.
And this happened to us and we were like, what’s going on?
What is the bug we introduced? It turned out, we reached quota.
I updated to a higher version of Firebase.
It’s a good thing to pay attention to if you’re going to
use functions with Google Assistant.
Finally, the last thing about Firebase.
We first created Village Assistant, there wasn’t the
Cloud Firestore, just the realtime database.
The database worked well, for the most part, but it was really tricky to query data. Cloud Firestore removes that trickiness. If you have a Firebase realtime database, I can tell you, you can migrate incrementally. We did this in a couple of days by looking at each of the data exchanges one at a time, keeping the realtime database in place while we switched over to Firestore, and it worked okay.
So, we’re getting close to the end of the talk and I just
wanted to leave you with a couple to-do lists.
I love to-do lists. The first one is my to-do list.
I put a link here to our code. A lot of the stuff we’re using
in Village Assistant, it’s beta. All these things are really,
really young. Yeah?
So, eventually I do want to move that code to a more official
Google GitHub repo. I want to mention some of the
key issues I know are there so when you’re looking at the code
and you’re like, oh, my god, this is terrible.
The way we match villages to a user is just not that pretty and
I know there’s a much better way to do this.
Ideally, I would like the user to enter in a village name and
we could find the closest fits. In the super future, when the app is actually a real thing, I’d like clever logic for identifying the best villages based on which users are close to or inside a store to get your milk.
That’s, like, pretty far away. The other thing that’s wonky is
the UI where you create and join villages. So, when I
originally envisioned doing kind of the village creation and the
invite of users to your village, I wanted it to be
inside Google Assistant. I just wanted push to be the main UI for the Progressive Web App.
But it turned out that there’s some security limitations in
doing that right now. And I think eventually the
Assistant platform will evolve and that’s something I really
want to implement. I will clean up the PWA UI so you can invite users. I want to make my response times faster. I want to make the webhook faster.
I also want to create a pretty simple test suite and clean up my code. One of the things I think is very worth mentioning: when you start to build apps that bring all these products together, creating test suites is pretty challenging. I mean, you can write a test
suite for things in isolation but if you want to test how they
all come together, there’s a lot of complexity in that so
that is something I want to take on-board.
And now, this is your to-do list, should you take on the
challenge. You know, for sure, start by
creating a PWA and host it with Firebase.
It’s very easy, I wrote a blog post that claims you can do it
in five minutes or less. Make sure you run Lighthouse to
test and tweak your Progressive Web App.
And then learn to build conversations with choices.
There’s the link that I mentioned before.
You want to create a webhook and learn how to get that dialogue
going on between your PWA and your Assistant app.
Add push to your PWA. I linked to the best get started
video and the Stack Overflow discussion that Matt and I had.
You get used to it once you know you have to pass in the
string and parse it. It’s worth exploring and playing
around with it. And, that’s it.
So, thanks very much. Hi, everyone. My name is Shailen
Tuli. I’m a developer programs engineer at Google. I
will be talking about writing battery-efficient Android apps
that use location. Location-based apps are
absolutely everywhere – transportation apps, geoapps,
navigation apps, weather apps – even dating apps all use location. Yet we have the sad
spectacle that, all too often, users simply turn off location
on their devices which means a lot of these apps either don’t
work at all or they work in a degraded manner. And why do
users do this? Because, fairly or not, they associate location with battery drain and they think turning off location will help preserve battery. This is
bad for developers who write the great apps, bad for Android as
an ecosystem, and, of course, it is terrible for users. So
location is used a lot – we know that. Location APIs currently allow developers to request location at virtually any time and make aggressive location requests with no barriers. When your app is in the foreground – when you have an activity that you can see, and it’s only going to be for a short time – it doesn’t matter too much what you’re doing. When you go off into
the background, that’s another story completely. Background
location has been identified by us as a major contributor to
battery drain and power issues. Aggressive use of background
location is a major reason why people disable location on their
devices. So, in a response to this, and this has been a
persistent problem for many years now, in a response to
this, the Android team starting with Android O put in place some
fairly substantial limits on the gathering of background
location. Basically, the tl;dr is apps running on O devices have background location-gathering throttled. Location is made available a few times an hour, and that is it. This applies to everything running on O devices.
Towards the end of the talk, I will get into the nitty-gritty
of what this really means. But for now, that’s basically it.
You just cannot go crazy in the background and do whatever you
want, there are some limits in place. What about pre-O? The majority of devices are running Android N or lower. What about those devices? For the foreseeable future, that is going to be the
case. This talk is fundamentally about identifying best
practices that you could use now in your Android apps when you
use location, so that you’re writing your apps in a
battery-efficient manner. Let’s dive into this.
I’m going to start off this exploration by talking about location APIs. After that, we will talk about the exact relationship between location and battery drain. Then I will dive into
common-use cases that every developer has to address when
they are writing location apps, and see if we can come up with
some best practices that you can all use in your apps. We will
sort of end with a discussion of the limits that have been put in place in
Android O and we will get into some details on that. Okay, so,
for historical reasons, there are two ways in which you can
get location when you’re using Android apps: framework location and fused location. Framework location is the older one, been there since the beginning. It is basically android.location.LocationManager,
giving you a wide API surface whereby you as app developers
can decide: I want to use GPS, I want to use Wi-Fi, I want
to use some sensor, and you can get location as you see fit.
This type of location is not optimised for battery, and we discourage you from using it. What we would like you to use instead is the fused location provider. This is available through GMS core; it is in com.google.android.gms.location.
Fused location provider provides a narrower surface and sits on
top of platform and hardware components. The way this works is you tell the fused location provider what kind of location you want – coarse, fine, how frequently you want it, et cetera – and it just figures out what underlying technologies to
use and how to do this in the most battery-efficient way. This
location provider is highly optimised for battery, and we
would like you to use this. So, what is fused location? There are a bunch of inputs that go into fused location: GPS, Wi-Fi, cell, accelerometer, gyroscope, magnetometer.
I want to talk about what they mean for battery usage. Start
with GPS. GPS works great outside. It has some trouble
with cities and tall buildings but in clear skies, it works fantastically, super accurate
location but terrible for battery. That is your trade-off
– great location accuracy but really bad for battery. Then
you’ve got Wi-Fi. The coverage for Wi-Fi is mostly
indoors. The accuracy is pretty good. You can tell using just
Wi-Fi where a person is in a building and what floor they’re
on. The power consumption isn’t as bad as GPS, but Wi-Fi scans
are fairly expensive. It is not free; it does cost something.
Then there’s cell. Of course, this is available indoors and
outdoors, available almost everywhere. The accuracy
unfortunately with cell is not so great. You’re not going to
get a location which is accurate to within a few feet, you will
get location to a neighbourhood level or a city block, et
cetera. But it is great for power consumption. It basically
uses very, very little power, so it is fantastic for that. Then
you have the sensors, which play an extremely important role in
making fused location provider do the right thing and do the
right thing for battery. You have accelerometer which
measures changes in velocity and position. You have gyroscope
which measures changes in orientation in the device, and the magnetometer which allows
you to use the device as a compass. By and large, most of
these sensors have very, very little battery cost. Fused
location provider will use these sensors in conjunction with
Wi-Fi and GPS to use the best as it can with minimal battery
usage. If you request fine location – accurate to within a few metres – fused location will use GPS and Wi-Fi, but GPS and Wi-Fi work even better when you combine them with sensors. So, for instance, I mentioned GPS is a little bit jumpy when you’re in environments with tall buildings.
Imagine Hong Kong, Mumbai, New York City, where I live, San
Francisco. Those are challenging environments for GPS. When GPS
gets a little flaky, fused location, instead of making
expensive GPS scans, will say, “Let me see what the sensor data
tell me. What is the accelerometer telling me?” It
pieces together a pretty good sense of what it is that is
happening. The same with Wi-Fi. Wi-Fi can be a bit jumpy. When
it gets jumpy, fused location provider will not do excessive
Wi-Fi scans but instead look at the sensor data and at what the device might be doing. Indoor maps sort of work like
that. There was a time when Google maps would give you – if
you went to a shopping mall, it would say you’re in this mall.
Now it says you’re right here in this shopping mall on the third
floor. It will do things like that. A lot of that is driven by
sensors. If location had to be pulled constantly, if the Wi-Fi
scans had to be constantly done, that would be terrible for
battery. It doesn’t have to do that. Once it gets a Wi-Fi fix, it can look at the sensor data and figure out, in a battery-efficient way, where you are – are you turning or moving, et cetera?
That’s basically what it is. The summary of this is, where
possible, given the choice between framework location and
fused location, you should always use fused location. This
is our recommendation. Switching to fused location if you’re
using framework location in your apps is probably the single
best thing you can do in terms of battery performance of your
apps when it comes to location-gathering.
There’s one higher-level API that is the geofencing API, and
that should be an important tool for anyone building location
apps. What is geofencing? It is a case where you can define a
circular region somewhere and say whenever the device enters
or leaves this region, or sits in this region for a certain number
of hours, do something. Let me know. And that basically is how
geofencing works. Geofencing is built on top of fused location
and it’s highly optimised for battery. So basically the way it works is the API monitors device proximity to a geofence. The closer you are to the geofence, the more expensive it is. It figures out: what is your speed? Are you in a car? Are you walking? How far are you from the geofence? It optimises for battery in terms of monitoring the geofence in the background. We
will talk more about geofencing later on. All right. So we’ve
talked a little bit about APIs and I’ve given you a little introduction to the fused location provider. What I’m going to talk about now is the relationship between battery drain and location in sort of a concrete way. I mentioned that with the fused location provider you have to essentially tell it what you want. You make a location request, and it does the right thing, and it does the right thing in a battery-efficient way. So, essentially, what this section
is going to be in my talk, it’s going to be about what is a good
location request? How do you tell fuse location provider what
it should do? So, I would say the battery discussion can be anchored on three points: accuracy, frequency, latency. I will talk about all of these
in quite a lot of detail. Accuracy is of course how
accurate is your location? How fine do you want it to be? The
way this works is that you can take the location request that
you create and define a priority. There are a bunch of
priorities that you can choose from, and depending on what you
choose, fused location will give you different technologies
under the hood and give you what you want. The most accurate of the lot is priority high accuracy. This will use GPS if it is available, and every time there’s a trade-off between accuracy and battery, battery will lose and accuracy will win.
It is going to give you the most accurate location it knows how
to do. This is a good kind of a use case for foreground, when
you have short-lived activity that’s in the foreground, or
something. This is a terrible idea for background because it
is going to be prohibitively expensive in terms of battery.
Related to that is another priority, balanced power accuracy. This will rarely use GPS, relying mostly on Wi-Fi and cell instead. It is better in terms of battery. I would recommend that those of you writing location apps consider this as a default. It does
give you pretty good location without burning out your
battery. The next is “priority low power” and this will be
hitting the cell network, not using a lot of Wi-Fi, it will
not use GPS. This will give you coarse location. You can’t say
I’m a few feet here or there, but you will be able to say I’m
in this part of Bangalore versus that part of Bangalore.
Depending on your use case, this may be all you need, in which
case you should never request more expensive location updates
than this. The most interesting of all is the “priority no
power” which is saying give me location updates but do not
spend any power. How does this bit of magic work? In this case, what you’re saying to the fused location provider is: don’t calculate any location for me, but if another app is computing it, let me know the results. That’s what “priority no power”
means and it’s an incredibly good tool to have because it
doesn’t cost your app anything. So that’s where accuracy is.
Let’s talk about frequency now. Again, it is fairly simple to understand what this means. The more frequent your updates – your location consumption – the more expensive it is for battery, but there is a little bit more to it than that. Frequency is defined by a method called setInterval. Location services will try to honour
that value. If you say give me location updates every two
minutes, it will try to do that. If you do it every 15 seconds,
it will try to do that. Generally speaking, apps should pass the largest possible value when using setInterval – especially background apps. Using intervals of a few seconds, 15 seconds, 30 seconds, is really something you should reserve for foreground use cases. Location services will do what you ask it to do; it is up to you to choose wisely. Now, if you set a location update setInterval of two minutes, there is a caveat, which is that it is just a suggestion. Your location
updates may be a little slower or a little faster. The way that
can happen a little faster is if another app is requesting
location at a faster rate, that will be brought to your app as
well, because this location data is shared between apps. So, for
that reason, we have another method we can call in building our location request, called setFastestInterval. It says: give
me location, even if it is coming from another app no
faster than what I’m specifying here. So, here’s a little example. You create a location request object, and you set its interval to be five minutes. So at this point, every five minutes, your app is going to have location computed for it. But if you also set a fastest interval – in this case, one minute – then for any app running out there that is requesting location, that location will be made available to you as well, but no faster than once a minute. This is a pretty good way of not burning the battery yourself. You’re relying on other applications to do the work, and you get the location they’re computing kind of for free. It is a passive way of getting location, and it’s a pretty powerful way of conserving battery.
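Roughly, in code, that request might look like this – a minimal sketch against the Play services LocationRequest builder, with the five-minute and one-minute values taken from the example:

```java
import com.google.android.gms.location.LocationRequest;

public class Requests {
    // The spoken example: compute location every five minutes, but passively
    // accept locations other apps have requested, at most once a minute.
    public static LocationRequest fiveMinuteRequest() {
        return LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
                .setInterval(5 * 60 * 1000L)      // our own computation: every 5 minutes
                .setFastestInterval(60 * 1000L);  // piggyback on other apps: no faster than 1/min
    }
}
```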
All right. So that’s frequency. Latency is really about: when location services has location to give
you, how quickly do you need it? How quickly do you want location
updates to be given to you? So, remember, we talked about
setInterval, when you set an interval of 30 seconds or two
minutes, that’s what location services will try to use as this
interval at which it gives you location. There is also a method called setMaxWaitTime, which is a way of having your location delivered in batches some time after it has been computed. setInterval is how often location is computed for you; setMaxWaitTime is how often location is delivered to you. Let me make this concrete with
an example. Again, we create a location request and set the
interval to five minutes. This means location will be computed for you every five minutes. Each time a new location is found, your app will be woken up, and that location will be given to your app. If you
set a max wait time of one hour, something different will happen. Your location will still be computed every five minutes, but it will be delivered to you in a batch every hour, and you will get 12 location data points, at least in theory. Instead of being woken up every five minutes, your app will be woken up every hour, which is dramatically better for battery consumption. Batching is a really, really good thing to use, especially for background cases where you don’t want your device to get woken up a lot.
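A sketch of that batched request – same builder, with the five-minute and one-hour values from the example:

```java
import com.google.android.gms.location.LocationRequest;

public class BatchedRequests {
    // Locations are still computed every five minutes, but the device is only
    // woken once an hour, with a batch of (in theory) 12 points.
    public static LocationRequest hourlyBatch() {
        return LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
                .setInterval(5 * 60 * 1000L)       // computation: every 5 minutes
                .setMaxWaitTime(60 * 60 * 1000L);  // delivery: batched once an hour
    }
}
```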
If you’re using geofencing, the equivalent is setNotificationResponsiveness. If you don’t need your geofencing result to be immediate, you can have a window to hold off before a geofencing result is given to your app. You can set the responsiveness period to something high, and that is also a very good thing for battery. This is a classic case of how you build a geofence: you set a circular region, you set when it expires, you set what conditions you want, and you build it. But if to that you add setNotificationResponsiveness, and give it a sufficiently large value, that will make your geofencing all the more battery-efficient.
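As a sketch, such a fence might be built like this; the ID, radius, expiry, and five-minute responsiveness value are all made up for illustration:

```java
import com.google.android.gms.location.Geofence;

public class Fences {
    // A circular region with a generous responsiveness window; the looser the
    // responsiveness, the cheaper the fence is to monitor.
    public static Geofence buildFence(double lat, double lng) {
        return new Geofence.Builder()
                .setRequestId("stadium")                        // made-up key
                .setCircularRegion(lat, lng, 500 /* metres */)
                .setExpirationDuration(24 * 60 * 60 * 1000L)    // expire after a day
                .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER
                        | Geofence.GEOFENCE_TRANSITION_EXIT)
                .setNotificationResponsiveness(5 * 60 * 1000)   // fine to hear within 5 minutes
                .build();
    }
}
```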
That’s a bunch of stuff, so to summarise: it is a fairly obvious thing. The more frequent and more accurate your updates and
the lower your latency, the more expensive it is for battery.
So, in foreground use cases, you could have it all – you can be
frequent, as accurate as you want, as low latency as you
want, but for everything else, you’re going to have to trade
off on one of these or more than one of these, and that’s where
you get to preserve battery. Okay, so that’s a lot of looking
at APIs, looking at API calls. You’re wondering: I have practical problems to solve; what’s the best way to solve them? Let’s start with an obvious one: you want to know the location of a device. For example, you’re a weather app, you want to show the right weather. You need to know where the phone is. Here, I would say you don’t get location updates,
you use cached location. Every time location is obtained for
your device, it’s cached somewhere. You can use
getLastLocation. This will give you what you need in a lot of cases. The API has ways of knowing how stale or fresh this is. If it is not null and doesn’t look too stale to you, use it. You save a tonne of battery that way.
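Roughly, with FusedLocationProviderClient – the ten-minute freshness threshold and the weather callback are hypothetical app code, not part of the API (and this assumes the location permission is already granted):

```java
import android.location.Location;
import com.google.android.gms.location.FusedLocationProviderClient;

public class CachedLocation {
    // Use the cached fix if it exists and is fresh enough for our purposes.
    public static void useCachedLocation(FusedLocationProviderClient client) {
        client.getLastLocation().addOnSuccessListener(location -> {
            long maxAgeMs = 10 * 60 * 1000L;  // our own choice of "fresh enough"
            if (location != null
                    && System.currentTimeMillis() - location.getTime() < maxAgeMs) {
                showWeatherFor(location);
            }
            // otherwise fall back to a real (more expensive) location request
        });
    }

    private static void showWeatherFor(Location location) { /* hypothetical app code */ }
}
```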
Another use case: you have user-visible foreground updates – for example, a mapping app of some
kind. So here, because it is foreground, it is okay to use
high accuracy, high frequency, and low latency. It’s expensive,
but it is okay because, in the foreground, this is pretty much
tied to your activity’s life cycle, and it will end soon. So
typically, what you would do in an activity is request location updates, but you would also do the following: in onStop, you remove the updates. Otherwise, location gathering will keep happening long after your activity is gone, which is obviously a very, very bad thing to do.
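A sketch of that pattern – request in onStart, remove in onStop; permission checks are omitted for brevity:

```java
import android.app.Activity;
import android.os.Bundle;
import com.google.android.gms.location.FusedLocationProviderClient;
import com.google.android.gms.location.LocationCallback;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationResult;
import com.google.android.gms.location.LocationServices;

public class MapActivity extends Activity {
    private FusedLocationProviderClient client;
    private final LocationCallback callback = new LocationCallback() {
        @Override public void onLocationResult(LocationResult result) {
            // update the map with result.getLastLocation()
        }
    };

    @Override protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        client = LocationServices.getFusedLocationProviderClient(this);
    }

    @Override protected void onStart() {
        super.onStart();
        // Foreground: high accuracy is fine because it is tied to the activity.
        LocationRequest request = LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY)
                .setInterval(5_000L);
        client.requestLocationUpdates(request, callback, null);  // assumes permission granted
    }

    @Override protected void onStop() {
        // The crucial part: stop gathering when the UI goes away.
        client.removeLocationUpdates(callback);
        super.onStop();
    }
}
```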
Okay, another use case: you want to start location updates at a specific location.
You want to start location updates when you’re near home,
when you’re near work, near a cricket stadium – whatever. So
here, it is a pretty good case of mixing geofencing and
location updates. So, typically, what will happen is, imagine
you’ve defined a geofence around some area of interest. If a
user enters or exits a geofence, location services will let you
know, and, at that point, you can say this is a trigger I was
waiting for, I’m now going to request location updates. A
common pattern for this is the geofence gets triggered, you get
notified, you maybe show the user a notification, the user
taps on the notification, your app opens up to some activity,
and, at that point, location updates begin. Something like
that. Another common use case: you want location updates
but you only want them tied to a specific user activity – maybe
when the user is riding a bike, or driving in a car. Here, we
would use the activity-recognition API and
combine that with location updates. It would work like the
previous example: let’s say you were tracking cycling. Location services would tell you when the user is likely to be on a bicycle, and you can take that through the notification flow I talked about – something comes into the foreground – and, boom, off you go, location updates. So things
can get really complex. You’re maybe satisfied with these
simple scenarios I have mentioned, but what if you want
geofencing and you want activity recognition, all of them to
happen at the same time and then sort of somehow combine that
with location updates? We realised that there are complex use cases, so, for that reason, we have exposed an Awareness API. This is a powerful API. It basically
senses and infers your context, and it manages system health for
you, and it does so in a battery-efficient manner. If
you’re dealing with complex scenarios, the Awareness API may be
exactly what you’re looking for. It tracks lots of things: what
time of day is it? What is the location of the device? What
are the places nearby? Are there coffee shops or stadiums
nearby? Houses of worship? What is the activity of the device – is the person on a bike, is the person in a car? Are there beacons nearby? Is the person wearing headphones? What is the weather like? You can take all of these contexts and create a larger sense of a fence. Basically, you can easily
react to changes in multiple aspects of the user’s context
and this generalises the idea of a fence well beyond
conventional geofences which of course are just for location.
Here’s an example: you create a context fence and it tracks three things. You create an ActivityFence, which says: track that the user is driving. You create a LocationFence, which says: track this geofence – maybe a stadium geofence or something like that. And then a TimeFence: make sure it’s between this time and this time. When all these things are true, and your app is in the background, location services will say, “All the conditions you specified are true, I’m letting you know you can now do whatever,” and that whatever could include location updates.
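A sketch of what that combined fence might look like against the 2017-era Awareness fence API – the coordinates, the evening interval, and the fence key are all made up:

```java
import android.app.PendingIntent;
import java.util.TimeZone;
import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.DetectedActivityFence;
import com.google.android.gms.awareness.fence.FenceUpdateRequest;
import com.google.android.gms.awareness.fence.LocationFence;
import com.google.android.gms.awareness.fence.TimeFence;
import com.google.android.gms.common.api.GoogleApiClient;

public class ContextFences {
    // Driving AND near-the-stadium AND evening, combined into one fence;
    // the app only hears about it when the combined state changes.
    public static void register(GoogleApiClient client, PendingIntent pendingIntent) {
        AwarenessFence fence = AwarenessFence.and(
                DetectedActivityFence.during(DetectedActivityFence.IN_VEHICLE),
                LocationFence.in(12.97, 77.59, 1000 /* m radius */, 0 /* dwell ms */),
                TimeFence.inDailyInterval(TimeZone.getDefault(),
                        18 * 60 * 60 * 1000L, 22 * 60 * 60 * 1000L));  // 6pm to 10pm

        Awareness.FenceApi.updateFences(client,
                new FenceUpdateRequest.Builder()
                        .addFence("stadium-drive", fence, pendingIntent)
                        .build());
    }
}
```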
Similarly, there’s a snapshot API made possible through Awareness, and, again, it’s a simple way to ask for multiple aspects of a user’s context. Again, an example: you find out what the current place is. You find out what the current activity is. If the current place is a shopping mall and the activity is walking, hey, maybe it is time for you to start location updates so you can tell the user, as the user walks, what stores are nearby, or maybe some discounts that you can offer, et cetera. You’re using multiple inputs and context, and that can get expensive for battery because you will be running a lot of different things. If you use the Awareness API, you can minimise the battery costs because the Awareness APIs are pretty much battery-optimised. All right.
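And a sketch of a snapshot call, here just asking for the current activity – the walking check is hypothetical app logic:

```java
import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.DetectedActivity;

public class Snapshots {
    // One-off question: what is the user doing right now?
    public static void maybeStartUpdates(GoogleApiClient client) {
        Awareness.SnapshotApi.getDetectedActivity(client)
                .setResultCallback(result -> {
                    if (!result.getStatus().isSuccess()) return;
                    DetectedActivity probable = result
                            .getActivityRecognitionResult().getMostProbableActivity();
                    if (probable.getType() == DetectedActivity.WALKING) {
                        // hypothetical: time to start location updates for nearby stores
                    }
                });
    }
}
```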
One more use case that I want to spend a couple of minutes on,
which is long-running background updates tied to specific
locations. You want to find all the Starbucks in Bangalore. You
want to find all the ATMs, all the Metro stops in every
metropolitan area. Here, we get into a solution that involves dynamic geofences. Location services imposes a limit: you can only use 100 geofences at one time. There are many more ATMs and Starbucks than just 100. Also, maintaining 100 geofences is actually pretty expensive. That’s a lot
of scanning that the location services will have to do, and
that’s going to drain your battery. So the solution is
dynamic geofences. Maybe put a geofence around some city. When the device enters that city, dynamically register geofences at locations inside that city. You have the outer geofence; dynamically, you put in the inner geofences. If the person leaves the city, you can remove the fences that are inside because you don’t need them any more.
This is a way you can, in a battery-efficient way, get a lot of geofences, get around the 100-geofence limit, and actually do pretty amazing things.
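A sketch of the dynamic-geofence swap, assuming a GeofencingClient; the inner-fence helpers are hypothetical app code:

```java
import java.util.Collections;
import java.util.List;
import android.app.PendingIntent;
import com.google.android.gms.location.Geofence;
import com.google.android.gms.location.GeofencingClient;
import com.google.android.gms.location.GeofencingEvent;
import com.google.android.gms.location.GeofencingRequest;

public class DynamicGeofences {
    // One cheap outer "city" fence; the many inner fences only exist while
    // the device is actually in the city.
    public static void onCityTransition(GeofencingClient client,
                                        GeofencingEvent event,
                                        PendingIntent pendingIntent) {
        int transition = event.getGeofenceTransition();
        if (transition == Geofence.GEOFENCE_TRANSITION_ENTER) {
            client.addGeofences(
                    new GeofencingRequest.Builder()
                            .addGeofences(innerFencesForCity())
                            .build(),
                    pendingIntent);                          // assumes permission granted
        } else if (transition == Geofence.GEOFENCE_TRANSITION_EXIT) {
            client.removeGeofences(innerFenceIdsForCity());  // no longer needed
        }
    }

    // Hypothetical helpers: build the inner fences (ATMs, stores, ...) for the city.
    private static List<Geofence> innerFencesForCity() { return Collections.emptyList(); }
    private static List<String> innerFenceIdsForCity() { return Collections.emptyList(); }
}
```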
Now, the problematic one: we want long-running background updates with no visible app component.
Think of an app that passively tracks your location for hours
or days at a time. This is a case that keeps people up. This
is a case that is inherently problematic. This is a case
where you get into that problem that I initially referred to
that background location gathering is a major drain on
battery. If you want to do it, how do you do it? Let’s talk
about that. You could run a long-running service of some
kind and then request location updates every now and then. The problem with that is that if you plan to run your app on an O device, you can’t have long-running services in the background any more. This isn’t a good option for you any more.
The API exposes a method for getting location updates using a
pending intent, and that’s exactly what you should do. You request location updates, give it a location request, give it a pending intent, and location services will wake up your app when a location is found.
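A sketch of the PendingIntent approach, using the kind of background-friendly values this talk recommends; LocationUpdatesReceiver is a hypothetical receiver, shown inline here but in its own file (and in the manifest) in a real app:

```java
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import com.google.android.gms.location.FusedLocationProviderClient;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationResult;
import com.google.android.gms.location.LocationServices;

public class BackgroundLocation {
    // No long-running service: location services wakes the receiver below
    // whenever there is a (batched) result to deliver.
    public static void start(Context context) {
        FusedLocationProviderClient client =
                LocationServices.getFusedLocationProviderClient(context);

        LocationRequest request = LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
                .setInterval(15 * 60 * 1000L)      // moderate frequency: a few times an hour
                .setFastestInterval(60 * 1000L)    // happily take passive updates
                .setMaxWaitTime(60 * 60 * 1000L);  // high latency: batch deliveries hourly

        Intent intent = new Intent(context, LocationUpdatesReceiver.class);
        PendingIntent pi = PendingIntent.getBroadcast(
                context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        client.requestLocationUpdates(request, pi);  // assumes permission granted
    }

    public static class LocationUpdatesReceiver extends BroadcastReceiver {
        @Override public void onReceive(Context context, Intent intent) {
            LocationResult result = LocationResult.extractResult(intent);
            if (result != null) { /* handle result.getLocations() */ }
        }
    }
}
```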
In cases like that, what should the location request look like?
What are you going to do in the background that doesn’t burn
battery? You use moderate accuracy, low frequency, and
high latency. Let’s look at those three things right now. You do not want priority high accuracy for any background use case. This is bad for battery. For
frequency, I think a good pattern would be to request
updates a few times an hour – let’s say every 15 minutes which
I have in this slide. Certainly, you should try to get more updates from passive location-gathering. That is why it’s good to set the fastest interval to a small amount. If others are gathering that location, you get that location for free. It doesn’t cost you anything. Latency. This is
really, really important. Imagine that you set your interval to 15 minutes and you set the max wait time to one hour: you will get four updates, calculated every 15 minutes, delivered together once an hour. That’s good for background and will save battery too. An important use case: what if you
want frequent updates while a user interacts with other apps?
Imagine a fitness app or a navigation app. So in this kind
of a case, you should use a foreground service. This is sort
of the recommendation that we’re coming up with because we
believe that when potentially expensive work is being done on
behalf of the user, that user should be aware of the work. A
foreground service, as you know, requires a persistent notification. The user will be able to see, “Ah, stuff is being done for me, I like it, I approve of this,” or they will say, “I don’t like it, I will get rid of that.” Either way, the user isn’t having their battery burned in some silent, sinister manner.
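A minimal sketch of that foreground service pattern – the notification text is made up, and on O you would also need to attach a notification channel:

```java
import android.app.Notification;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class TrackingService extends Service {
    private static final int NOTIFICATION_ID = 1;

    @Override public int onStartCommand(Intent intent, int flags, int startId) {
        // The persistent notification is what makes the expensive work visible.
        Notification notification = new Notification.Builder(this)
                .setContentTitle("Tracking your run")   // hypothetical copy
                .setSmallIcon(android.R.drawable.ic_menu_mylocation)
                .build();
        startForeground(NOTIFICATION_ID, notification);
        // ...then request frequent location updates here, as in the earlier examples.
        return START_STICKY;
    }

    @Override public IBinder onBind(Intent intent) { return null; }
}
```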
Okay, I’m slightly over time, so I will quickly go over the location limits. You can see that these
location limits are a logical extension of many of the best
practices that we’ve talked about in this talk so far. So, I think starting with O, the location story gets fantastic, and it gets fantastic because we throttled back on location in a sensible way, and this happens to any app that’s running on an O device. The short of this is that background apps can receive location several times an hour and that’s it. Your location request may be something more ambitious than that, but you’re not going to get that. You’re going to get several times an hour. This happens regardless of target SDK version. You can do
batching but again, you will only get location several times
an hour, not much more frequently than that. Also,
Wi-Fi scans are much more efficient in O. We figured that
people spend a lot of time at home – six, eight, ten, twelve, fifteen
hours at a time, and a lot of time at work, eight, ten –
hopefully, not more than that, but either way, you’re inside a
place, connected to a Wi-Fi access point, and you’re not
moving. In Android O, scanning for Wi-Fi is much more
efficient. There’s also smarter geofencing in O. Until O – in N and lower – scans default to every few seconds; in Android O, it happens every couple of minutes,
and we found that, in a lot of devices, that can give us 10X
savings in terms of battery. That’s really great. On pre-O devices, where there are no limits, location is a busy thing, even on a device that’s actually in the background, not
being used. In O, with these location limits in place, things
are much more serene and organised, and your battery
consumption is much, much less. I’m out of time. I’ve gone over
by three or four minutes, but I would say I hope you write
location apps, I hope you write apps that are ambitious, and I
hope you take away from this talk that it is possible to do that and not sacrifice battery. Go ahead, build it up. Thank you! [Applause].
ROB: Hello there. Good afternoon. Welcome to frameworks
and tools. I’m a developer advocate for Chrome and Web.
I’ve come over from London today, so the weather is making
me feel very welcome. Thank you for that. I’m also here for the
rest of the event. This is quite a short slot but I would love
to carry on this conversation with you. If you can’t find me
in person, you can find me on Twitter. Now, what I would like
to do today is explain the state of PWA tooling to you.
Also explain some of the goals and philosophy behind that
tooling so you have a concept of why you’re using it. Talk a bit
about how you can apply this in single-page applications, and
finally give you some best practices as well. Now, just so I have a bit of an idea of who has been paying attention, and who is still awake: who has already heard the term “PWA”? Good, that should be nearly all of you, even the Android
developers, because it was in the keynote this morning. What I would like to do is give you some context before we go into this. One of the questions that we always get is: what is the right way to build my app? How do I build the perfect PWA?
Please don’t ask me this question because of course the
answer is it depends, right? We are all coming from different
situations, some of us have apps already, some of us are
building brand new apps, and also the approach differs for different verticals as well. That said, there are common themes that run through, which we have identified and tried to distil down into four main pillars that I want you to think about when building these experiences. First of all, it
needs to be fast. You’ve probably heard the message
because we’ve repeated this multiple times, that every
microsecond of time that you make the user wait, that’s a
user who is abandoning your site, that’s a conversion that
you are not getting. So you really need to focus on making
sure that the experience is as quick as possible. Because we’re talking about Progressive Web Apps, we are talking about these high-powered websites, the ones that have taken the right vitamins, but that means we’ve raised our users’ expectations. It means they will need to be more integrated with the device: full control of the camera, access to Bluetooth; I expect location to work, and orientation sensors to work. Again, as these experiences become more app-like, it means users will have more app-like expectations, and that means reliability. These sites need to work regardless of the network connection: I should always get some experience useful to me as a user. Finally, like Sam mentioned this
morning, no matter how many of the magical bells and whistles
you implement, none of that makes a difference if you don’t
have the engaging content and functionality underneath that.
With that in mind, let’s take a look at some of the tools. The reason I want to do
this is that we want to put the user first. We want to
create the best user experience on the web and we feel that, to
create the best user experience, we should also give you the
best developer experience. I’m going
to show you some tools in a bit but I want to tell you about
what those tools can help you with first. Top of the list, like I said, putting the user first means saving the user’s time, because time is the biggest investment that the user is making in your app. So you can use tools to start
identifying those areas of poor performance, and apply common
patterns to make those time savings and speed up the
performance of your app. Critical to this, and closely
tied, is also bandwidth. Bandwidth is spending the user’s time downloading the app, but it is directly spending their money as well, so make sure that, when you’re analysing your bandwidth use, you’re taking a respectful approach to the user’s budget. Then, one of the
things that has been heartening about the web is this idea that
it is completely open and accessible everywhere, right? I
make a web page, and, when I publish that, any client, any
browser, can access that web page. But, in reality, we know
that’s not quite the case. You need to make tweaks for the
different browsers, different things that are supported in
different locations. But that should not be directly your
problem. Again, that’s something where you want to make use of tooling and libraries to handle the differences between those
browsers so that you can focus on writing the code that solves
your business problems rather than dealing with platform
inconsistencies. When I talked about integration before, we
enable things like push notifications on the web across
browsers, across systems, but there are a lot of moving
parts that you need to deal with to make those push
notifications happen, but again, it is all standard, so tools
and libraries can help you here. I want to touch on code
generation and how that differs from just using a library, and
finally, I will show you some of the tools that you can use to
make sure that, when you’ve gone through all of these previous
processes, you’re still enforcing the best practices
that you start with a good experience, and you keep a good
experience as well. Let’s look at some of the technologies
behind the PWAs that these tools are going to help you build.
So, first of all, a PWA is normally identified by its
manifest, so this is a simple file that you can think of as
the public description of what your PWA is capable of doing.
Then there’s the service worker. The service worker is the piece
of JavaScript that is able to run in the background and take
care of things for you when your site is not present in the
browser. We will use some of these tools
to build great applications as well which is hopefully what
you’re interested in. Starting with the manifest.
Since it’s a standardised JSON file, there are a few tools that let you jump in and go through the various items, and they will automate some of the tedious tasks in there, like providing the multiple sizes of icons and marking them up correctly. They will also help you make decisions about what kind of things you may want to include in your manifest – for example, whether your web app is a standalone web app, the display modes, what that means for browser support and the user options that are out there, and
why you might want to choose particular ones. The service
worker then falls kind of into the other category. It is an incredibly powerful tool but
with this power also comes a lot of complexity. This is the life
cycle of the service worker when it is being added to your
page. It can sit there as a proxy
handling all inbound and outbound requests from your
page. It will be what handles the incoming push requests. It
can also co-ordinate across all the open clients you currently
have on a device to manage communication between them. This
kind of low-level power is incredibly useful, but there’s a
lot that you need to learn to really take advantage of that.
So, again, we can wrap all of that functionality up in a
library, have some code generation to make sure this is
nice and simple for you. I want to show a bit about what you can
do with service worker, so, file caching. You will see there
are a number of different approaches you can take to doing
that. You can either do your file caching during the installation of the service worker – this is when you might proactively choose a number of resources that you might need to make your site available in an offline
mode. You might also want to do this at runtime, because it would not be a good idea to spider your entire site and download it all to the user’s device,
so you may want to fetch whilst users are browsing your site,
cache the content they’re looking at and store that in the
service worker. Finally, there are far more advanced caching
strategies that might be relevant to your particular
business-use case, for example, maybe you have a special offer
on the site that you want to ensure is available for the
user, but it has a hard expiration time so you don’t
want to show it past a particular point. Push notifications, then. Like I mentioned, there are a lot of standard parts here, but you will want different behaviour. When
the push notification is received, does the user
currently have your app open, or is it closed?
If it is open, you probably want to do something different by
showing an in-app notification versus showing a notification on
the screen that the user will tap and navigate through to your app. Finally on code
generation, then, really what I would like you to understand
here is that code generation is a way of giving you code that is
hopefully fast by default, it’s hopefully great by default, and
comes with a selection of industry best practices built
in, so it should have integration for metrics built in
there, information for analytics, security, usability,
and so on. But a distinction between a code generation tool and a library is that, when the code generation is
complete, that code is yours. You need to make sure that you
understand the code that has been produced and that you take
it through the same review and testing processes as you would
for code that any of your developers have written. Code
generation can be a lot of fun but don’t assume it is a magic
wand that you can wave over your application and everything
will work. Okay, I want to split the tools for building a PWA
into a couple of different categories. Generic tools, which
we apply almost anywhere, framework-specific tools that I
will show you, and all the way at the end of this line is once
you understand these tools, you start customising them to be
tightly coupled to your own build and application process.
If we’re talking about generic tools, then, the best place to
start is with the browser, because you’re as close to a
user as possible. Now, Chrome has its own developer tools
built into the browser, and we could spend an entire session
exploring this, but I want to highlight two areas that are
particularly important if you’re working with Progressive Web
Apps. The first is the application tab, and
specifically, the service worker entry inside of that. If you go
into this when you’re looking at any web page, you can see the
currently active service workers attached to that page.
You can see what stage they’re in, you can examine what they’re storing, and you can see the console logs specifically for that service worker as well. As part of your development process, you’re
going to want to be very familiar with this, because as you start to introduce all of this caching functionality, you need to make sure you know when you’re getting the latest version of the code. You can also see in here there’s an option to bypass for network, so, when you’re developing, you can always just make sure that the caching for your service worker is completely ignored and you’re always fetching from the network.
Secondly, who has already visited the Lighthouse booth?
Everyone else, after this, you should go and see the Lighthouse
booth. Put your site in. See what you get out. Lighthouse is
a standalone tool but it exists inside of Chrome DevTools as one of the audits that you can run against your site. So, again, if you go to the Audits tab inside of DevTools and perform an audit, you will get back a score that covers the various Progressive Web App functionality for your site; it makes a number of performance-related measurements;
accessibility measurements; and a number of general best
practices for websites as well. Hopefully already, you’ve heard
from some of my colleagues about Workbox. Workbox falls into
the generic tools section. What Workbox focuses on is how it can
build out a service worker for you with a number of specific
patterns. That means it is relatively simple to take
Workbox and apply it into an existing site or apply it to a
new site as well. It gives you three main things, but the team
is always working on bringing additional best practices into
this. First of all, various offline caching patterns, so,
when I talked about the various approaches you might want to
take, Workbox provides these by default. You can do cache first,
you can do network first, you can race the two so you can
respond with whichever is quicker. You can set expiration policies and so on. It also makes available offline
analytics, too. It means when your app is offline, you can
collect the analytics events, and, when you’re back and
connected again, then it will batch those together and send
those on. So this is incredibly important, because, if you make
your app available offline, you still want to know how your
users are interacting with it so you can improve it in the future.
Finally, we make use of newer technologies like background
sync as well, so this means you have a number of built-in
strategies for how you want to refresh content that you are
storing on the user’s device. Workbox is open source; find it on GitHub. If you have something that you think that
Workbox should be able to do, you could contribute code
yourself. Now, we’re not the only people doing this. If you
make use of webpack for bundling or other parts of your build
pipeline, then I would highly recommend looking at the offline
plug-in for webpack. This enables you to build out a
service worker based on your webpack configuration where you
can specify various assets for caching ahead of time, or
caching when the user browses them or making them optional so
you can cache them when they’re most useful. This is on GitHub,
and there’s plenty of sample code and documentation linked
off that repository to help you get started. The other thing,
like I said, I really loved the open nature of the web and how
easy that makes it for everyone to get involved. I also like it
when companies then start contributing this stuff back as well. Interest has been doing a
huge amount of building a highly-performing web app.
They’ve pushed back their libraries on testing, creating
and experimenting with service workers. One thing that is
interesting about their offering is they include a number of
test harnesses and isolation methods for the service worker
as well, so we sometimes have a tendency with new technology to
get a little carried away with all the cool stuff we can build
and not necessarily cover testing it, so to me this is a
sign that the technology is being much more mature and
production-ready as well. Then, because of how fast-paced this
is, I feel it is important to understand some of the history
as well, so some of the earlier libraries that are still active, such as sw-precache and sw-toolbox – they’ve been superseded by Workbox. You will find a number of CLIs and tools built around them, but I would look at Workbox first if you’re starting off something new of your own. Okay, it is
also always good to have a sort of industry convention or a
benchmark that you can use as well, so most of the time, if
you have a device with a screen – whether a graphing calculator, a printer, an oscilloscope – the default benchmark is to run Doom or Quake on it. For front-end developers, it is: can I take your JavaScript framework and write a Hacker News client with it? HNPWA is a collection of Hacker News readers built in a
variety of frameworks using a variety of libraries. What I
find really useful is that this is a way to compare approaches
in the different frameworks, so, if you’re trying to decide
which one you want to choose to solve a particular problem, or
you want to see how the industry is making use of different
technologies, then this is kind of a great playground to go and
compare solutions to the same problem across a multitude of
tools. Okay, hopefully by now, you
should have a feeling for the kind of mindset that I’m hoping you can get into. What I would like to accomplish with this – what you should be thinking of these tools doing for you – is, one, simplifying the mental model. The amount you need to learn to properly understand and become an expert in service worker is huge, and that investment is probably not the best use of your time when you’re doing your job. Also, a lot of these problems have already been
solved. So there’s no need for you to reinvent the wheel and
spend a lot of time writing a lot of extra code to handle all
of these edge cases. All in all, this is about saving time for
you, and basically getting the best solution for your user.
Really, what it means is that you can focus your effort on the
use cases that are important. The problems that are closer to
your business, not the problems that are closer to the platform.
As well, like I keep saying, the accessibility and the open
nature of the web means that it was really easy for me to get
involved when I was younger, creating terrible websites with Perl backends, but it was simple and very accessible. What this means is that, by using these libraries, you can layer in a service worker with that same level of simplicity and get
going and get something published on the web. When I
talk about edge cases as well, like the kind of things that
these libraries will handle for you that you don’t want to have
to try and write again yourself, service worker, installation:
if your app loses connectivity during the install, then what
are you going to do? A naive approach here would be to say,
“Well, I had a list of things I was going to cache, so I just
won’t cache them.” Or maybe you actually say, “I didn’t complete
caching my list of things, so I’m going to delete it and try
downloading it all over again.” Neither of these are good
solutions for the user. One gets them nothing, and the other
makes them spend their bandwidth again to get the same result
they could have had. If you use a library here, there are a
number of approaches that can go through and validate the
existing items that you have cached and then pick up and
resume the download where you left off. Again, what if your
user has full connectivity? This seems a little
counterintuitive, but if I’m reading a piece of content and
I’m getting the cached version from your service worker and it
updates in the background, what should you do? Should you force
me to refresh my page? Should you show me a little
notification saying I can tap here to refresh, or can you get
the content and dynamically insert it into the existing
page? All of these patterns already start to exist in these
libraries, so rather than trying to work out how you’re going to
do it, you can just work out which one you want to use. One of the benefits of service worker and having an additional caching layer as well: if you deal with an API you don’t control, you have the ability to add additional layers of caching on top of it. If you have an API that doesn’t work well offline, you can cache the responses from the API, so, when your app is offline, you can act as if you’re getting the last piece of data available from the API. Okay. Now, I’m reiterating this
again: you’re here to save time, you’re here to save yourself a
lot of code. It is not just developer time; it is operational time, bug-fixing, QA – it saves a lot of effort. The reason I reiterate this is because of the number of times that developers always seem to want to go off and reinvent the entire
universe because it gives them an opportunity to write a new
JavaScript framework. Please don’t do that. Now, I want to
jump into single-page applications. I also want to
stress here that there are lots of different ways of approaching
PWAs. As you have seen, my colleagues have shown you a couple; tomorrow, there’s a talk on migrating your existing site to a PWA as well. But if you’re starting with one of the single-page application frameworks, then here is a selection for you. Now, this is by no means a
comprehensive list. Like I was warning, in the time that I’ve
been presenting, someone has probably written and released a
new JavaScript framework anyway. Maybe someone in this room has
done this, actually. I will show you
examples from these five. What I would like to stress is how
easy it is for you just to dip in and get started so that you
can see how these frameworks operate, and you can start to
make a more informed decision about what you would like to
choose for your project. Let’s start with React, then.
React has create-react-app, a command-line script which will
create a service worker for you, it will generate your web
manifest and give you a cache-first strategy for serving
your assets as well. It is as simple as this. I’m using yarn
in these examples, but there’s no reason why you can’t use npm or your own package manager, or manually download these as well. Here, I’m installing create-react-app in the global scope. I’m calling create-react-app with the name of the app. I run yarn build. It will output information for you
saying how you can run a development server, how you can
run a production server. Run the production server and jump in
and you’ll reach the Welcome to React page, where they show you where in the code you can go to continue your adventure. If
you open DevTools as well, this is an ideal time to go and look
at the service worker that has been installed for you so you
can take a look in there and see the methodology that they are
using to create that cache-first approach to serving the content. Next up is Preact. This is fuller-featured. They put out an app shell: you get your static HTML with some content pre-rendered, and then it will fetch the first route. Preact also provides browserslist autoprefixing. This means, by default, you will
get your CSS automatically prefixed for the different browsers that are
coming in. Quite interestingly, this is one where you may want
to look at this regardless of whether you’re using Preact. They have a configuration that will interface with Firebase static hosting, because that is HTTP/2 and can handle server push, so by default it will take advantage of HTTP/2 push to push
some of those resources down, speeding up that delivery to the
user even more. Like I mentioned before, they use sw-precache in the background,
and if you want to understand what they’re doing, it may be
useful to look at the library too. Here, we are saying create
default, which is the name of the template. There are a number
of different tell me place you can explore if you want to —
you can templates you can explore. I get my feedback back
how to run the development server, and I jump in. By
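A hedged sketch of the equivalent Preact CLI commands ("default" being the template name mentioned; the app name is illustrative):

```sh
yarn global add preact-cli
preact create default my-preact-app
cd my-preact-app
yarn build
```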
By default, they give you a bit of a material-design-style app where
you’re actually seeing the routing component in action as
well. So, the navigation that you have at the top there will
enable you to navigate between different tabs. We’ve then got
the Polymer CLI for the Polymer framework. This gives you an
optional service worker you can drop in. Some of the approaches that Polymer takes are interesting. They give you three different bundles: an ES5 bundle, an ES6 bundle, and an unbundled ES6 version. As browsers are progressing, people are asking: do we need to bundle our JavaScript, or can we leave it unbundled?
At the moment, you still want to bundle, but this is a good way
of getting that unbundled one so that you can benchmark between
the two of them to see what the performance difference is for
you. The Polymer CLI gives you the PRPL pattern out of the box, so there is a server that comes with the Polymer CLI that you can use to serve your content using HTTP/2 push, proactively pushing a number of resources to the user. Again, here we're pulling in the Polymer CLI. This will take you through an interactive process: I've chosen the starter kit here, and I'm running the build.
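Roughly, the Polymer CLI flow looks like this (the init step is interactive; the starter kit is one of the choices it offers):

```sh
yarn global add polymer-cli
polymer init     # interactive: choose the starter kit template
polymer build
polymer serve    # the bundled server mentioned above
```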
We get another nice material-design-related app with navigation and routing
built in. Again, go take a look at the service worker that is being run there. That will show you not just the caching but also how it handles some of those incoming resources as well. Next up is Vue, vuejs. This gives you the app creation, the manifest, the service worker, and the app shell. One of the things that Vue does is some nice lazy-loading of the additional JavaScript, CSS, your fonts, and so on, and I recommend looking at how they do this so that you can pull in that functionality as you need it, rather than necessarily bundling it up front and pushing it down to the user. That looks like
this. We're adding the vue-cli. This is another interactive one, so it will guide you through the process, asking you to name your application and so on. We go in, run the yarn build, and we jump straight into our Vue application again.
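A hedged sketch with the vue-cli of that era and its PWA template (template and app names are illustrative):

```sh
yarn global add vue-cli
vue init pwa my-vue-app   # interactive: name, description, and so on
cd my-vue-app
yarn
yarn build
```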
As per usual, go take a look at the developer tools, and
there’s the service worker. Then, finally, we can take a
look at Angular. The Angular CLI tool will give us the app creation and the service worker. What they do with the service worker
is a little different. So rather than giving you the service
worker directly, they actually have a JSON-based configuration
file that is designed to handle the standard caching, there’s a
plug-in you can pull in for handling push notifications and other life cycle events, so they've tried to provide this
abstraction on top of the life cycle, so that you can just
focus on the processes you’re trying to implement rather than
needing to understand the service worker itself. Here, I'm adding the Angular CLI and creating the app. Inside of Angular, you need to set a configuration option saying I want a service worker by default; then I make the production build of my Angular app, we jump in, and we have basically a "hello world" app with links to other resources that can help get us started.
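Sketched with the Angular CLI of that era (1.x); the service-worker setting shown was how you opted in at the time, but check the docs for the version you use:

```sh
yarn global add @angular/cli
ng new my-ng-app
cd my-ng-app
ng set apps.0.serviceWorker=true   # opt in to the generated service worker
ng build --prod
```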
That's my whirlwind tour through some front-end
frameworks. You’re all experts now, so go off and build the
next generation of PWAs. I want to highlight a few other
projects you can look at. PWA.rocks and PWA Builder are good tools for taking you through the functionality choices you want to make. PWA Builder gives you a set of selections you can go through, answering questions about the kind of functionality you want, and it will start to spit out some basic templates and starter kits for you. Best practices, then, to finish us
off. Always remember to go and check the application tab,
because I guarantee you will run into a couple of issues the
first time you're using service worker. Rob Dodson has written a brilliant guide. This will give you tips on things like adding a kill switch to your service worker, so you have an escape hatch to wipe it and know the correct way to reset things, rather than clearing your entire cache and cookies, or uninstalling and reinstalling Chrome in the hope of getting a fresh version of your application.
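As a minimal sketch of such a kill switch (the general idea from guides like Rob Dodson's, not a verbatim copy): deploy a service worker that unregisters itself and reloads any open pages.

```js
// Kill-switch service worker: ship this in place of a broken one.
self.addEventListener('install', () => self.skipWaiting());

self.addEventListener('activate', (event) => {
  event.waitUntil(
    self.registration.unregister()
      .then(() => self.clients.matchAll({ type: 'window' }))
      .then((clients) =>
        // Reload every open tab so it picks up the plain network site again.
        clients.forEach((client) => client.navigate(client.url)))
  );
});
```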
The piece of advice I would like to leave you with here is: try to
follow the tools. So, if you find yourself fighting with the
way the tool is trying to do something, then that's probably a sign that either you need to change the way you're thinking, or you need to look at a different tool. The tool should be getting you 80 per cent of the way there, and it is that remaining 20 per cent that you want to be focusing your effort on, not fighting with the way the tool is doing the
generation. A couple of other caveats. Amazing as DevTools is,
please don’t assume that it is a be-all and end-all. There are
a couple of things that might not work. For example, the
offline checkbox you use there doesn't affect every single type of network connection. If you have a WebSocket, via the Firebase Realtime Database for instance, that data might still be going back and forth even when you're in offline mode. In the end, make sure you test on a real device, preferably the
same device that the majority of your users have. I also want
just to call back to the Pinterest stuff, because they added end-to-end tests for the service worker as well, which can be incredibly helpful for isolating that logic and testing it through. And then, really, the last thing is to stay up to
date and stay involved, because service worker is gaining a huge
amount of adoption, so Safari is going to be implementing it,
and so on. But it is in development, and
the spec is always being updated, so that means you want
to watch for those future developments, and if you have a
direction you would like the project to go, then this is the
time to get your voice heard. So please, go out and give those
tools a try, give feedback on how you found them, and I would
like to say thank you very much for your time. Any other questions, come and find me, or poke me on Twitter, and please enjoy the rest of the event. Hello, everyone. I'm a developer advocate at Google. Earlier this year, we announced in alpha the architecture
components, a set of libraries that allow you to design robust
and maintainable apps. Now the architecture components are
finally in 1.0 and ready to be integrated in your own
production applications. Today, I want to tell you about the
architecture component and I want to give you a set of best
practices just to make sure what you’re doing is on the right
track with the components. Also, I want to tell you a few things
about one of the newest additions: the paging library, which is still in alpha. Let's start with the architecture components; we will go over each of them. Consider an activity that displays information about a user. One of the biggest problems is
configuration change, because this is when the activity gets
destroyed and recreated. This is why in the architecture
components we’ve created the concept of Lifecycle and
LifecycleOwner. An activity has a Lifecycle and is therefore a LifecycleOwner. The lifecycle of a LifecycleOwner can be observed by a LifecycleObserver. You can implement your own LifecycleObserver and define methods that run whenever a specific lifecycle event is triggered; you annotate each method with the lifecycle events you're interested in, such as ON_START. With this, we can create components that are lifecycle aware.
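As a rough sketch, using the android.arch.lifecycle package names from the 1.0 release (the connection use case here is a made-up example):

```kotlin
import android.arch.lifecycle.Lifecycle
import android.arch.lifecycle.LifecycleObserver
import android.arch.lifecycle.OnLifecycleEvent

class ConnectionObserver : LifecycleObserver {

    @OnLifecycleEvent(Lifecycle.Event.ON_START)
    fun connect() {
        // Start listening, e.g. register a network or location callback.
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_STOP)
    fun disconnect() {
        // Stop listening so nothing outlives the visible lifetime.
    }
}

// In an activity (a LifecycleOwner): lifecycle.addObserver(ConnectionObserver())
```

One of the lifecycle-aware components that ships with the library is LiveData.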
LiveData is actually a data holder. Other components can set
the value of the data being held and activities and
fragments can observe that live data. They can react on it, and
update the UI. But when the activity is paused or destroyed, the subscriber is considered inactive, so events are not propagated to it. When the activity is recreated, we subscribe again and the UI gets the changes from the LiveData. The next component is the ViewModel. This is the
life cycle of a view model compared to the life cycle of an
activity. What’s important to know is that, when the activity
is finished, the view model is cleared. This means that the
ViewModel will survive configuration changes but not
survive pressing back, killing the activity, the application
from recents, or when the framework kills your app. This makes it a good place for long-running operations to finish. The data is exposed as LiveData, which may or may not be observed at any given moment, and this means you will get no exceptions from trying to update a non-existent view. You should avoid holding view or activity references in the ViewModel, because these can lead to memory leaks or crashes. So, instead of pushing data to the UI, the UI observes the ViewModel. Make sure you don't hold any UI logic in the view; rather, move it into the ViewModel so it can easily be unit-tested. For
example, it will be the ViewModel’s responsibility to
get the user, prepare it to be displayed, and then, if needed,
hold it for the UI. Then the UI would notify the ViewModel about
the user’s actions. The ViewModel works with the
repository to get and set the data. Repository
modules are responsible for handling data operations. They
provide a clean API to the rest of the application. They know
where to get the data from, what API calls to make, and when the
data is updated. So you can consider them as mediators
between different data sources. It is a good idea to have a data layer in your application that is completely unaware of the presentation layer. Algorithms that synchronise the database with the network are not trivial, so adding a single entry point that deals with this is recommended. The repository would know what API call to use to get the user. To make sure that we're not doing more network requests than needed, we would also save the data in a local database. To save the data locally, the components come with a new library: Room, an object-mapping library that provides data persistence with minimal boilerplate.
Our user table will look something like this. We would
have a user ID, name, and some other user information. What we
want is that, in such a table, every row is a user object. We define our user object, starting with the @Entity annotation. We define what the columns of this user are, using @ColumnInfo, and which one is, or which ones are, primary keys. To actually access the database and work with the data there, we use data access objects, so we create an interface. We annotate it and define the methods that work with our database. These can be query, insert, update, and delete. Queries can also return LiveData objects, making the query an observable query.
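A hedged sketch of that entity and DAO with the android.arch.persistence.room 1.0 APIs (table and column names are illustrative):

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.persistence.room.ColumnInfo
import android.arch.persistence.room.Dao
import android.arch.persistence.room.Entity
import android.arch.persistence.room.Insert
import android.arch.persistence.room.OnConflictStrategy
import android.arch.persistence.room.PrimaryKey
import android.arch.persistence.room.Query

@Entity(tableName = "users")
data class User(
    @PrimaryKey @ColumnInfo(name = "id") val id: Int,
    @ColumnInfo(name = "name") val name: String
)

@Dao
interface UserDao {
    // Returning LiveData makes this an observable query: Room re-runs it
    // and notifies active observers whenever the users table changes.
    @Query("SELECT * FROM users WHERE id = :id")
    fun getUser(id: Int): LiveData<User>

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    fun insert(user: User)
}
```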
What's an observable query? Let's say our user table looks like this. We
have two users, with ids 3 and 4. What we're interested in is getting the user with id 4. Our LiveData will emit John, because that is that user's name. When we update the table, and for the user with id 4 we set the name to Mark instead of John, our LiveData will automatically emit Mark, the new value. Room also supports Flowable if you're working with RxJava. Usually, developers that use RxJava use it through all the layers. One thing that you could do is use LiveData on the UI
layer. It was made for the UI, so leverage that connection to the activity life cycle. To help with this transition between LiveData and RxJava, you can use the LiveDataReactiveStreams class, which allows you to convert between the two. So now we have a UI that reflects the changes in the database. The ViewModel
propagates the changes in the repository to the UI. Like this,
we have a high degree of testability and a separation of
concerns. What we are showing here is actually the guide to app architecture. It is the way that we suggest you could architect your application so that you can have all of these properties.
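To make that flow concrete, here is a minimal sketch (class and method names are assumptions, not from the talk) of a ViewModel exposing repository data as LiveData:

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.ViewModel

// Hypothetical repository mediating between Room and the network.
class UserRepository(private val dao: UserDao) {
    fun getUser(id: Int): LiveData<User> {
        // A real implementation would also trigger a network refresh and
        // write the result into Room; the returned LiveData then updates
        // automatically from the database.
        return dao.getUser(id)
    }
}

class UserViewModel(private val repository: UserRepository) : ViewModel() {
    fun user(id: Int): LiveData<User> = repository.getUser(id)
}

// In the activity:
// viewModel.user(userId).observe(this, Observer { user -> /* update the UI */ })
```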
Let's talk about saving the data. Let's say you want to save the data that we display on the UI. Where should this be saved: in onSaveInstanceState, in the ViewModel, or in the database? One thing that we should really
remember is the ViewModel survives configuration changes
but not pressing back, killing the app from recents, or when
the framework kills your app. So, when we’re creating an
activity, this is what happens.
The view model works with the repository. The repository gets
the user from the API, and then saves the user in the database,
and then the ViewModel creates this user UI model, a class that
we would use to display data on the UI. So, let’s go over a few
scenarios. The first one is configuration change. Let’s see
what happens. In a configuration change, the onStop method is called. We don't need to make any calls to the repository or to the network. Scenario two: the app goes to the background and the user navigates back to the application. So, when the activity goes to the background, onSaveInstanceState is called; when the user comes back, we can display the user again from the ViewModel. Again, we don't need to call the repository, we don't need to do any network requests.
But scenario three, the most interesting one: when the app
goes to background and the process is killed. In this
case, when the activity goes to the background, onSaveInstanceState is called. This is where we can save the ID of our user. Then the activity gets killed, and later the activity starts again, receiving the bundle in onCreate. Based on this user ID, we can pass that to the ViewModel, which gets the user from the repository based on that user ID. So this means that we don't need to do any network requests, if we save the user ID in onSaveInstanceState. So,
in the end, what should we put in each of them?
What should we put in onSaveInstanceState, in the ViewModel, or in the database? Keep in mind that, in the database, you should put the data that survives process death. This is where you should put your user object, the big one that you have. Then, in the ViewModel, put the data needed for the UI, for example, the UI model that is displayed on the screen. In onSaveInstanceState, you should put the minimum amount of data that allows you to restore that state, for example, a list of IDs.
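A hedged sketch of scenario three (names are illustrative): save only the ID across process death and let the ViewModel reload the rest.

```kotlin
import android.os.Bundle
import android.support.v7.app.AppCompatActivity

class UserActivity : AppCompatActivity() {

    private var userId: Int = -1

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // After process death, the bundle still carries the ID, so the
        // ViewModel can reload the full user from the local database.
        userId = savedInstanceState?.getInt(KEY_USER_ID)
            ?: intent.getIntExtra(KEY_USER_ID, -1)
    }

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        outState.putInt(KEY_USER_ID, userId)
    }

    companion object {
        private const val KEY_USER_ID = "user_id"
    }
}
```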
But instead of one user, let's consider that we have a list of
users. Many applications need to load a lot of information from
the database, but database queries can take a long time to
run and use a lot of memory, and we have a new library, the
paging one, that makes it easy to load the information
gradually. For now, the paging library is still in alpha, and some of what I'm going to tell you is not live yet; it is only going to be released in the next version of the alpha. So, the app starts. The view model for the
screen is created. The view subscribes for notifications of the data. The view model works with the repository, subscribing to the data coming from it, and then the repository works with the data source. Nothing new so far. So we have the data coming from the data source to the repository, then to the ViewModel that changes the data, preparing it to be displayed on the UI, and then the view model gives that data to the UI. Let's say that in the UI we have a RecyclerView; that RecyclerView works with an adapter, and only then will the adapter notify the UI and display the list on the screen. So this means that the advantage of this architecture is that the data source is separated from the UI, and the UI just observes the data. We can have one data source feeding different RecyclerViews; we can use the same UI logic, the
same data source logic in different UIs. Then the paging
library will be used throughout this entire architecture. The object that all of these classes have in common is a PagedList. And then, instead of a RecyclerView adapter, you would use a PagedListAdapter. So let's see
what is going on there. We have these main components. What is a
PagedList? It's a lazy-loading list that pages content from a data source in chunks. It supports both infinite scrolling lists and countable lists. Creating a PagedList automatically triggers loading of the data from the data source, and this is why it should be done in the background. This is why the paging library is using LiveData: because LiveData ensures that whatever operation is done to request the data from the data source is done in the background. Then, once the data is constructed, it can be passed to the UI on the UI thread. Let's say that as a data source we now have a database. The data source gets the data from the network or the database. Then the data is put in the PagedList, and the PagedListAdapter works with the PagedList and updates the UI. What happens when the user scrolls? Well, at
this moment, the PagedList requests the next page from the data source. The data source gives the next page, and then the PagedListAdapter updates the UI. But let's say that the user swipes away an item. This means that the corresponding item in the database gets removed. So the database is updated, the item is removed. But because the source was updated, the data source is invalidated, and this leads to the destruction of the data source and the re-creation of the data source together with the PagedList. What is important to remember is that several types of operations can lead to an invalidation of the data source: if you're doing an insert, an update, or a delete in a data source, that should trigger an invalidation of the data source, and then the recreation of the PagedList. This new PagedList will be populated with the data from the data source. Then the contents of the two PagedLists are compared on a background thread, and the DiffUtil will tell the app what has changed. Only then is the removed row collapsed.
Your data can come from Firebase, files, the network, whatever you want, but you need to define how that data is requested. For this, we have two different types of data sources. The first one is the KeyedDataSource. You would use this one if you need element N-1 to load element N: based on the key from element N-1, you'll be able to get the next one. For example, this applies when you're working with the data based on a certain criterion, like when you need to get the users ordered by name. When you're implementing your KeyedDataSource, you define how to load the items after the current item, or before it if scrolling up is allowed. All of this is based on a key. This key is something you would define; in our case, for the user, the key would be the name of the user, because this is how we are ordering our list.
Another type of data source is the PositionalDataSource. If you're working with a fixed item count and requesting items based on their position, you would use this positional data source. When you're implementing one, you implement a load-range method, and this would be the point where you request some data from your back end. When the data is received, you inform the data source via the load callbacks. The next step is to implement the data source factory that can create the data source. This is needed because, when the data source is invalidated, the factory is actually the one that knows how to recreate the data source. If your data source is Room, your DAO can return the data source factory directly: under the hood, in the implementation of the DAO, Room will create the data source factory for you. But
how do you know when to trigger these data requests? How do
you know, when the user scrolls, that it is time to get the
data from the network? For this, you would use the PagedList boundary callback. This callback signals when a PagedList has reached the end of the available data, and this is actually the point where you trigger more requests, for example from the network, and decide whether you should request items from the front or from the end of the list. Then, to actually get the LiveData of a PagedList that ends up being used by the UI, we need to create an instance of the LivePagedListBuilder. This gets as parameters, let's say, the network data source factory, and then a page size: how many items should be in a page. Then you can set the boundary callback.
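A hedged sketch of that builder call (the paging API was still in alpha, so names may differ slightly in the version you use; dataSourceFactory and boundaryCallback are assumed to exist):

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.paging.LivePagedListBuilder
import android.arch.paging.PagedList

val users: LiveData<PagedList<User>> =
    LivePagedListBuilder(dataSourceFactory, /* pageSize = */ 50)
        .setBoundaryCallback(boundaryCallback)
        .build()
```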
I was showing this slide earlier, and I was saying that the DiffUtil compares the contents of the two PagedLists. The paging library wants to optimise this: instead of comparing two whole lists, it knows how to compare elements. Well, it knows if you tell it how to compare elements. More precisely, you will need to implement a DiffCallback: are the contents the same, and are the items the same? Then, in your adapter, you would extend the PagedListAdapter with your model and your view holder. You would pass as a parameter in the constructor the DiffCallback that you have just created, and then you just bind your view holder. In your activity, you need to make sure you have a LifecycleOwner, you create an adapter, and then you observe the changes of the LiveData of a PagedList from your ViewModel. Whenever that LiveData emits, you set the new list on the adapter.
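Sketched against the API as it later stabilised (alpha-era class names differed slightly; UserViewHolder is assumed to exist):

```kotlin
import android.arch.paging.PagedListAdapter
import android.support.v7.util.DiffUtil
import android.view.ViewGroup

class UserAdapter : PagedListAdapter<User, UserViewHolder>(DIFF_CALLBACK) {

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int) =
        UserViewHolder(parent) // UserViewHolder is a hypothetical view holder

    override fun onBindViewHolder(holder: UserViewHolder, position: Int) {
        // getItem() can return null for placeholders that are still loading.
        holder.bind(getItem(position))
    }

    companion object {
        val DIFF_CALLBACK = object : DiffUtil.ItemCallback<User>() {
            override fun areItemsTheSame(oldItem: User, newItem: User) =
                oldItem.id == newItem.id

            override fun areContentsTheSame(oldItem: User, newItem: User) =
                oldItem == newItem
        }
    }
}

// In the activity:
// viewModel.users.observe(this, Observer { adapter.submitList(it) })
```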
Now we have a list. The architecture components now have high stability, thanks also to all the feedback that we've received on them. Check out the paging library, and give us feedback. Tell us what works, what doesn't. Tell us where you would need more things to be done by the library instead of it being you who has to implement them. We have a lot of new components: LiveData, Lifecycle, PagedList. You can pick and choose the ones you want for your application, or you can use them all together. Check it out and let me know how it went. Thank you. [Applause]. ERIC: Welcome, everybody, to
modern tooling, testing, and automation. We are going to
talk about Lighthouse, Puppeteer. Let’s go ahead and
get started. My name is Eric Bidelman. I work with web
developers all over the world and help you guys build amazing
web experiences. A fun fact about me is I'm from a state in the US called Michigan, and I can point to my hand and tell people where I'm from, which is convenient. The other cool thing about Michigan is that it has more lighthouses than any other state in the United States. It was only fitting that I worked on a project called Lighthouse,
eventually. VINAMRATA: I’m a product manager
on the team working on Lighthouse. As a fun fact about
myself, I actually really loved Bollywood dancing as a kid, and
these are some embarrassing photos of me performing at
family functions. I recently realised that Bollywood dancing is
similar to my day job as a product manager – specifically
when I’m doing Bollywood dancing I’m trying to tell a story
through physical movements about what I’m feeling in the music,
and in product management, I’m telling a story to my users
about my products. I want to take you on a journey into the
Lighthouse product itself, more on the PM style and less of the
Bollywood dancing style! I want to focus on Lighthouse, talk
about the problem we are trying to solve, deep-dive into the
problem itself. And then Eric will take over and talk about
headless Chrome and Puppeteer. Let’s get started. As a web
developer, you might have heard about a lot of things that you're supposed to do. For example, you might have heard not to use methods like document.write; that you're supposed to optimise your images by compressing them for a performant web experience; that you're supposed to be on HTTPS to deliver a secure experience to your end users; that accessibility is important and you should have labels on your page; and that you should not use render-blocking scripts, and so much more.
To add to this, you might have heard about a thing called
Progressive Web Apps throughout this conference that help you
create mobile web experiences that feel like native app
experiences. In order to build a Progressive Web App, there are a lot of things you need to do, including adding a service worker, creating a manifest, and more. So you probably feel like this person: really confused, like, what am I supposed to be doing? All you really want to be doing is spending time pushing code and creating features for your end users, to be more like the person to the right, I guess.
That is a problem that we completely understand on
Lighthouse, because we want to enable all of you in the
audience out there to create really awesome mobile web
experiences. So that's why we built Lighthouse, which is
basically a product that helps you understand your website
against four different categories of performance,
accessibility, Progressive Web Apps, and developer best
practices, and running checks against them to create a
personalised report that helps you understand what are the
things that you’re doing well on your website, and what are the
things that you could perhaps do a little bit better? This is a little bit hard to understand at the abstract level, so let's go into a live demo. Switch to the demo, please! I'm going to run it live on a site called The Air Horner. You click this button and it
gives you the sound of an air horn. So pretty awesome! I’m
going to run Lighthouse through the Chrome developer tools, but
I will always talk about other ways you can run Lighthouse
later. I’m going to open up the Chrome developer tools here, and
I’m going to go into the audit panel, and voila, I have it
right there. I can check whatever categories of audit I’m
interested in. I care about everything, so I’m going to run
it across everything. And so now, what Lighthouse is actually
doing is that it's emulating my website on a mobile device, specifically a Nexus 5X device,
and it’s throttling the network, so it is simulating a 3G
connection. That’s why it takes a little bit of time, but now
you can see right here, I have my Lighthouse report, and things
look pretty good on my website, so that’s pretty awesome. Can
we switch back to the slides, please? There are quite a few
ways that you can run Lighthouse. The way I showed you was through the Chrome developer tools. You can also run it through the Lighthouse Chrome extension, through the command line with the Lighthouse npm module, and you can run Lighthouse on WebPageTest.
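For reference, the command-line route looks roughly like this (real Lighthouse CLI flags; the URL is just an example):

```sh
npm install -g lighthouse
lighthouse https://airhorner.com --view   # --view opens the HTML report
```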
Let's do a little bit of a deep dive into the Lighthouse report I was showing you earlier; for the purposes of this part of the talk, I will only be focusing on the performance and PWA sections. So let's get started with the PWA section. This was a slide earlier talking
to you about what are all the different requirements that you
need to implement in order to build a Progressive Web App? We
have a handy-dandy checklist to check what are the things you
need to do to build a PWA. Lighthouse automates the entire
process, so you don’t have to think about am I actually
implementing a service worker correctly? Lighthouse can check that for you, and it gives you a score at the top
which helps you understand at a high level how far are you in
the journey of building your own Progressive Web App? For the
Lighthouse definition, we consider a score over 100
meaning that you built a Progressive Web App. So now,
talking about the performance section of the report, before we
deep-dive into the report itself, I want to explain to you
how we think about the page load. On the Chrome team, you might have heard us talk about a thing called progressive web metrics. The way to understand a page load is how the end-user perceives the page loading. There are key moments in that user journey. First paint: the first content pixels appearing on the screen. This could be text, an image, an SVG. The next thing is first meaningful paint: when did the first meaningful content on my page appear? This would be a hero image, for example. Then, finally, at the end, you have time to interactive, when users can click on anything and they can see the page is responsive. So, what
Lighthouse does is basically takes all these metrics, shows
them to you for your web page, helps you understand at a high
level how good these metrics are and tells you what are ways you
can improve these metrics. This looks kind of daunting so I
will go through it step by step with you. At the very top, you
have a high-level score, in this case 63, that tells you this is
how the overall performance of your website is. The next
section is what we call metrics. Basically, at the top, you will
see a bunch of different images, what we call a film
strip which is basically how did your page look at different
time intervals of the website loading? And then, it also
gives you values for the metrics that I was talking about, like
first meaningful paint, and time to interactive.
The key thing to note here is that this section determines
your Lighthouse score, meaning that metrics like first
meaningful paint, first interactive and consistently
interactive are worth five times as much in terms of determining
your final Lighthouse score, so, when you’re thinking about
what metrics should I be paying attention to, and how do I think
about improving my Lighthouse score, those are the top three
metrics that you should be looking at. So the next section
of the report is the opportunity section, with the idea being
how are ways that you can improve your website? In this
case, you can see that optimising your images is the
best bet in terms of improving the performance of your website.
Finally, we have diagnostics. If you’re interested in
deep-diving into performance further, you can look into
things like critical request chains. So I would like to give
to all of you a sneak peek into what is coming up next for Lighthouse. I'm excited to announce that, in the next few releases, we will be adding a whole new section to the report itself, specifically about how you can make your website more friendly to search engine crawlers and indexers. It is coming soon, so it is not out yet. So now let me take a step
back and talk about Lighthouse and the broader context of web
development. Specifically, I want to start off talking about
our adoption metrics. Lighthouse has good adoption: we have 100,000 people using the extension and 250,000 using the npm module. In terms of DevTools, I can't share exact stats; it is about half as popular as the timeline panel.
We’ve seen people building services on top of Lighthouse,
specifically services like Calibre and Treo.
We are an open-source project and we definitely wouldn’t be
here without open-source contributions we’ve received
from developers like you. Even something as simple as changing
the readme file helps the project go a long way. We have 100-plus contributors. We have countries like India, and Brazil, and Poland, and even the United States' next-door neighbour, Canada. If you're interested in being part of the contributor community, check us out on GitHub. And then, finally, on behalf of the team, I want to thank everyone for being here. I really appreciate you coming out and listening to me talk about Lighthouse, and, if you're
interested in trying out Lighthouse, we have a booth at
the Sandbox. Come and talk to us after the talk if you want to
learn more. Now that you have a website and you have a nice
little way to audit it via Lighthouse, you might want to
think about what is the best way that I can detect regressions
and automate some testing in order to make sure the
regressions don't go to my end users? This is where Eric comes in, to tell you about Puppeteer and how to make that happen. ERIC: Thank you. Can we switch
to Keynote, please? Onwards to Puppeteer. Sorry, okay. It
is not on my screen, but it is on yours. Let’s get started. We
talked about manual testing using Lighthouse which is in the
DevTools now. Maybe you want to do some testing and automation
using Headless Chrome and Puppeteer. We will talk about
both of these right now. My clicker doesn’t work,
either! All kinds of technical fails!
All right, so what is headless browsing? How many people have
heard of Headless Chrome? Nice. Normally, when you click the
icon on your desktop, right, you launch Chrome, there’s this
nice window, a page you can interact with, the url bar, the
DevTools that you can poke around in. In headless browsing,
there's none of that. There's no UI, no URL bar or address bar to interact with; there's no chrome to Chrome. So, using Headless Chrome, you can't directly see what is going on; you control it using the scripts that you write. To launch Chrome in headless mode, you provide one flag on the command line: --headless. You're not going to see a
window. What do you do with it? The important thing is to add this other flag, --remote-debugging-port. This is where the magic happens. What this does is launch Headless Chrome but enable the remote debugging protocol, the same API and protocol used by the DevTools itself when you're inspecting Node or your applications. That's the same stuff we can tap into using this command-line flag. By doing this, we can write an application in Node.js to control and automate Headless Chrome. If you want to know more about
Headless Chrome, I'm not going to talk about it too much today, but there's a lot of cool stuff you can do from the command line. You can take screen shots, you can generate PDFs.
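For example, with real Chrome flags (Chrome 59+; the binary name and path vary by platform):

```sh
chrome --headless --disable-gpu --screenshot https://example.com
chrome --headless --disable-gpu --print-to-pdf https://example.com
chrome --headless --disable-gpu --remote-debugging-port=9222
```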
Check out this article. There are some interesting things you can do,
but the more interesting things is when you write programmes
that control Headless Chrome.
One way you can use Headless Chrome programmatically is to use this amazing little module called Chrome Launcher. Actually launching Headless Chrome and dealing with Chrome on different platforms and systems, finding Chrome, launching the right version: it is complex. We abstracted that and created an npm module for you guys to use. It is really easy to launch Chrome with this module. You can pass that remote debugging port flag; here I'm saying launch in headless mode, and in a few lines of code you can interact with Chrome in your Node.js programme.
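A minimal sketch with the chrome-launcher npm module (chromeFlags and the returned port are part of its documented API):

```js
const chromeLauncher = require('chrome-launcher');

chromeLauncher.launch({
  chromeFlags: ['--headless', '--disable-gpu'],
}).then((chrome) => {
  console.log(`Headless Chrome debugging on port ${chrome.port}`);
  return chrome.kill(); // shut Chrome down when you're finished
});
```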
This is where Puppeteer comes into the mix. It
is a library for working with Headless Chrome. There are a lot of automation and testing libraries out there; you might have heard of PhantomJS and Selenium. We're not trying to reinvent the wheel, but to create an easy out-of-the-box experience for Headless Chrome. We want that to be easy,
especially in the case of Chrome. The Chrome team said
let's build a modern Node.js library and take advantage of
the new ES6 features. We’re using promises all over the
place. The other reason for that is it’s the way the
architecture of Chrome works. We're writing a Node programme, sending asynchronous messages which in turn automate and do things with Chrome. All that message-passing is asynchronous, and promises lend themselves nicely to that. Of course, async/await makes the code a lot cleaner. You can use Puppeteer in older versions of Node, no problem there. The other thing we want
to do is bundle Chrome with the library. One of the hard things to do is actually install Chrome on different platforms and make sure all the dependencies are installed. So, when you get Puppeteer from npm, we actually download a build of open-source Chromium, so you don't have to worry about configuration or anything like that. It just works; you focus on your work. We also want it to be a reference for the DevTools
protocol. With the protocol itself, there's so much you can do; it is a complex, awesome API surface. We wanted to create the highest-level API possible: really wrap the protocol API in the most useful things we could, so we have API calls for the most common use cases that you would
use. Where does Puppeteer fit in our overall testing narrative? I present to you the pyramid of
Puppeteer. At the bottom of this pyramid, we have the browser: Headless Chrome. That gives you your new ES6 JavaScript features and the new web
platform features. The fact that we can use an automated testing library and use things like service worker and push
notifications and some of these web platform features is
exciting. We haven’t had that in the past in some of these other
frameworks. On top of that, you have the Chrome remote debugging protocol. Again, a very, very complex, big API, but that's the thing that's going to interact with Chrome itself. We're not going to interact with that directly.
That's where Puppeteer comes in: a small shim that sits on top
of this lower-level stuff and at the very top is where your
node scripts come in, that interact with the Puppeteer API
and they control Chrome. That’s how everything fits together. To
show you the difference between using the DevTools protocol by itself and using Puppeteer, here are two examples: navigate to a page and print the HTML content of that page. You don't have to understand the details of the code on the left; just know that it's a lot more involved, right? There is more stuff going
on. I need two libraries to launch Chrome and control the
protocol. I have to do a lot of set-up and clean-up. I have to
enable things and disable things. In the example on the right, using Puppeteer, it is really clear what's going on, right? You
launch a browser, create a new page, you navigate to
example.com and you print the content of that page. Puppeteer
makes a lot of these things very easy to do in just a few lines
of code. What can you do? The first thing a lot of people do
is take screen shots of their page. You can do that with
Puppeteer's APIs. Of course, the first thing you need to do is go grab Puppeteer off npm. Just install it locally. It will
bring down a version of Chrome, and you can require it inside of
your node application. First things first, if you want to write a script that uses Puppeteer, the first thing you probably want to do is launch Chrome. Puppeteer has a launch method, and by default that will launch a headless version of Chrome. Again, everything is a promise, so this is going to
resolve and give a browser instance to interact with and
control. Given a browser instance, the next thing is we
want to create a page, using async/await to clean up the promises a little bit. We will use browser.newPage to create that. We use page.goto to navigate to example.com. Then,
finally, we’re going to take a screen shot of the page itself.
So Puppeteer has an API for that, page.screenshot. It is
kind of nice. It has got this path property that you can set.
You don’t have to read a stream or read a buffer or anything
like that. You give it the file name that you want to create,
and, boom, you have your screen shot on disk. Last but not least, you probably want to close the browser: we are done with it, we don't need to do any more scripting, so we will close out Chrome. Five lines of code to take a screen shot of your web app. Pretty cool.
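Assembled, the script described above looks roughly like this (Puppeteer's documented API):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch(); // headless by default
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'screenshot.png' });
  await browser.close();
})();
```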
What are the other things you can do with Puppeteer? Screen shots are one; PDF is another. Headless Chrome has the ability to print to PDF, and you can use Puppeteer's APIs for that as well, similar to the screen shot. We can navigate to Google.com, emulate the screen media type so we don't get the print style sheet, and save the PDF to disk. Pretty simple.
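A sketch of the PDF example (page.emulateMedia was the method name in early Puppeteer; newer versions call it page.emulateMediaType):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.google.com');
  await page.emulateMedia('screen'); // avoid the print style sheet
  await page.pdf({ path: 'page.pdf' });
  await browser.close();
})();
```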
Another thing we can do is emulate a device. Maybe you want to test the responsiveness of your site or application. This example here uses some of the built-in pre-defined device descriptors we have, so you don't have to worry about the viewport settings or display settings. In this example, I'm emulating an iPhone 6 device and navigating to Google.com, and we get the mobile version of Google.com. This is a programmatic way to do a lot of the things you can do in DevTools device mode.
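A sketch using the bundled device descriptors (this require path was how early Puppeteer exposed them):

```js
const puppeteer = require('puppeteer');
const devices = require('puppeteer/DeviceDescriptors');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.emulate(devices['iPhone 6']); // sets viewport and user agent
  await page.goto('https://www.google.com');
  await page.screenshot({ path: 'mobile.png' });
  await browser.close();
})();
```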
One of the neater things that you can actually do is inject
code into the page, right? Maybe you want to test
functionality of the page or make sure that JavaScript is operating like you expect it to. Over here, we will navigate to
just my Twitter feed. We will find the first tweet on the
page. We will programmatically click that element that will
bring up this overlay. That’s what Twitter does when you click
the first element, and take a screen shot of that DOM element,
take a screen shot of a page, a full page, or a DOM element –
the choice is yours. I’m going to run some code on the page and
the first thing I will do is call launch. Launch a new
version, instance of Headless Chrome. Create a new page to
work with. Navigate to my Twitter stream. Next what we
will do is use page.$eval. You give it a CSS selector that will find that node on the page. The cool thing is that the callback is injected inside the browser, very similar to typing this in the console. So we will click that element using anchor.click(), which will open that overlay. The next function call is waiting for that selector to be available: Puppeteer has a method that says make sure this element is visible before I move on. When that promise resolves, finally, we will take a screen shot of that DOM element. So we will grab the handle to that element and take a screen shot. All in all, it
looks something like this. We will open the browser, navigate
to Twitter, and find the first tweet using Puppeteer. It finds that element. Eventually, what you get is the final product, which is the screen shot; this is my new puppy. Say hi to Chewy. He likes Star Wars! You can take screen shots of full pages, or of DOM elements: screen shots of any portion of the page you want.
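A hedged sketch of the tweet example (the selectors here are assumptions; Twitter's markup changes):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://twitter.com/ebidel');

  // The callback runs in the page, like typing it into the console.
  await page.$eval('.tweet a', (anchor) => anchor.click());

  await page.waitForSelector('.overlay', { visible: true });
  const overlay = await page.$('.overlay');
  await overlay.screenshot({ path: 'tweet.png' }); // just that DOM element
  await browser.close();
})();
```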
What we saw was really, really powerful. In a couple of lines of code, we wrote some Puppeteer
API code, and you can wrap that in your favourite testing framework, and all of a sudden, you have an instant smoke test, right? Insert your favourite testing harness, no matter what you want to use, and you have got an integration smoke test, testing the functionality of Twitter in this case, easy to do using Puppeteer. Another thing that you can do, and it is very,
very powerful, is intercepting requests before the browser
issues those requests. We can do that using the setRequestInterception method. Every time the browser makes a network request, we will intercept that request and decide what to do with it. This example here will navigate to youtube.com; if a request is for an image, it will abort that request. If not, we let it pass through as normal. The end result, if you run this piece of code in Node, is exactly what you would expect: the images don't load. What this is great for is testing: does a layer of my site work if images don't load? Is my accessibility and layout okay? You can do this for other resources like CSS, maybe serving up a different style sheet using network request interception. Decide what to do based on the request type.
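A sketch of the image-blocking example (resourceType is a method on the request in current Puppeteer versions; in the earliest releases it was a property):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setRequestInterception(true);

  page.on('request', (request) => {
    // Abort image requests; let everything else through as normal.
    if (request.resourceType() === 'image') {
      request.abort();
    } else {
      request.continue();
    }
  });

  await page.goto('https://www.youtube.com');
  await page.screenshot({ path: 'no-images.png' });
  await browser.close();
})();
```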
A really common thing to do in automation is form submission. Does my form actually work? So Puppeteer has high-level APIs for this, for typing in form inputs and clicking things on the page. Go to Google.com, right? We will input the text Puppeteer into that search box, just by selecting it using its CSS selector, and we will call page.click to press the Google search button. Then we use the waitForSelector method: each result is an anchor tag wrapped in an H3, and we wait for those results to be ready using that method. Finally, we will use $$eval to print the
titles to the console. In node, you get exactly what you would
expect, a list of search results for the word “Puppeteer”. We
showed interaction with the keyboard, form submission, and
actually just scraping content from the website using
Puppeteer’s APIs.
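A hedged sketch of that search flow (Google's markup changes often, so the selectors are illustrative):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.google.com');

  await page.type('input[name="q"]', 'Puppeteer'); // simulated keyboard input
  await page.click('input[type="submit"]');
  await page.waitForSelector('h3 a');

  // $$eval runs in the page and maps over every matching element.
  const titles = await page.$$eval('h3 a', (links) =>
    links.map((link) => link.textContent));
  console.log(titles.join('\n'));
  await browser.close();
})();
```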
Something new in the DevTools is this panel called "performance monitor" that has a slew of performance information inside of it. A lot of it is being surfaced in Lighthouse now. You can get access to that inside Puppeteer with a simple API call: page.metrics will give you this information. It corresponds to the panel: all the information like how long do your scripts take, and how long does it take to recalculate styles in this app? More and more stuff is being added to this all the time by the DevTools team; this will get richer as we go on. This is great if you want
to track performance over time for your application, maybe in a
CI environment. Really useful stuff.
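A minimal sketch of that call (page.metrics is Puppeteer's documented API; the returned fields include ScriptDuration, LayoutDuration, JSHeapUsedSize, and so on):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  const metrics = await page.metrics(); // counters akin to the panel
  console.log(metrics.ScriptDuration, metrics.JSHeapUsedSize);
  await browser.close();
})();
```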
There's a tonne of stuff you can do with Puppeteer; I can't cover all of it today. If
you use a service worker, or have a PWA, and you want to test that your site works offline, you can turn JavaScript off, or you can test with the network connection off and see if your site does indeed work offline. Using Puppeteer, we can also intercept network and console messages: any time the browser or the site logs something to the console, we can intercept that, print it, and do something with that information. If we don't have a device descriptor for the device you want, you can use page.setViewport. So, before I
leave you, a couple more pro tips that I've run into. I've spoken to a lot of developers starting out with Puppeteer, and I want to mention a few pro tips for debugging; I think they're useful.
Let’s talk about this launch method. Again, it launches
Headless Chrome and you get a browser instance to interact
with. A couple of interesting things, right? If you're writing a script and you can't see what is going on, it is not so useful. Maybe you're debugging a script: just turn headless off. You can launch full Chrome, see Puppeteer run the script, navigate, and click around. That's really useful just for debugging, so I highly recommend throwing the headful mode of Chrome on. You can also open the DevTools if you want
using the flag devtools: true. That will open the
DevTools at the same time. Kind of useful. You can see DevTools,
poke around as Puppeteer is automating your page. One day, it might work!
Debugging actions: another interesting thing you can do is set slowMo. slowMo is an option that allows you to slow down all operations that Puppeteer does by a certain number of milliseconds. If someone is typing in a website, they don't type as fast as the computer; you can slow this operation down using slowMo, which will simulate a real user, to see what a real user would do on your site. It slows things down, like navigation, so you can see things happening as Puppeteer is going through. dumpio is useful if your page is doing something weird and the browser is crashing; turning this flag on is useful for getting that information.
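Put together, the debugging aids just mentioned are all launch options (these option names are Puppeteer's documented ones):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: false, // headful mode: watch the browser work
    devtools: true,  // auto-open DevTools for each tab
    slowMo: 250,     // add 250 ms between each Puppeteer operation
    dumpio: true,    // pipe browser stdout/stderr to this process
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
})();
```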
Last but not least, if you don't want to install anything today, you can use a site I hacked together over the weekend. You can go there, play around with Puppeteer's API, run our demos, tweak code, and see the results at the top: the console information, and also any PDFs or screen shots you generate. It is easy to get started and tinker around. No guarantee of uptime, because it is a hack project. One thing that you can
do which is cool with this is use Puppeteer locally on the
machine to launch that site which runs Headless Chrome and
Puppeteer in the cloud, so we’re using Puppeteer to control
Puppeteer which is like this crazy inception moment. To show
that is possible, I built this little script. What we are going to do is run Puppeteer on my machine, launch that site, and inject code: you can see I've injected code that opens that page itself and takes a screen shot of the final product. Hopefully, we will see a picture within a picture here. My mouse at the top is not going to move; we're using Node. Puppeteer will
click that run button, get a final product and eventually,
what happens, is you get a screen shot within a screen shot
which is kind of cool. Before I leave you, just
a couple of things to mention. The first thing is make sure if
you’re using one of these testing frameworks out there,
make sure they’re using the headless version of the browser.
A lot of browsers have a proper headless mode, and some of
these APIs have been around for a while.
They may not be using this mode. It will save you memory and time; it is a lot faster to test in headless mode. A lot of people use libraries like JSDOM for testing code that uses the DOM. If we have a headless browser, we can test in a real implementation, so maybe you don't need a library like JSDOM any more. Consider that. Also, test other browsers, not just
Chrome, obviously, Firefox has a headless mode that they
launched and other browsers are also implementing headless mode.
Test across all browsers; it is very important. We threw a lot of stuff your way. Here are the open-source projects and resources that we talked about today: Lighthouse, the DevTools protocol if you really want to know what it can do (awesome stuff), Headless Chrome, and the Chrome Launcher module. I think with
that, we’re all done, and we really appreciate you guys
sticking around. We know it is late in the day. Thanks everyone
on the live stream for attending. I’m Eric Bidelman.
SAM: Thanks, all, for coming. [Applause].
Welcome to the last session of the day. We're almost at the after party. My name is Nick Fortescue. JOHANNES: I'm working with Nick on Google Play. NICK: I've been working on Google Play since it was called Android Market, so quite a while now. We are here to talk to you about why quality matters in apps. We want
every Android user, when they open an app installed from Play,
to open an amazing, beautiful, wonderful app. Of course, we
don’t write the apps, you guys do, so we need you to help us.
Our aim today is to persuade you of how important quality is,
not just for all the billions of Android users out there, but also for you as a business. So we're going to talk about this in detail. I'm going to talk about why quality matters. Johannes is going to talk about what you can do to
improve the technical excellence of your app and how he can help you, and then more beyond technical excellence. First, why is quality important? We will give you hard numbers on this. You've seen this slide before in the
keynote. We in Google Play in London took some apps and measured their quality by a
number of factors that Johannes will tell you about later. We
split them into categories: excellent apps, average apps,
bad apps. We looked at those categories. When we went from an average app to a good app, those apps were earning six times as much income. Imagine how your company would be doing if it had six times the income from your app, a big improvement. They also got seven times as much retention. That means users came back over and over again, far more for higher-quality apps than for low-quality apps. Hopefully, I've convinced you that
this is important. It is not just me but our partners who are describing this. Here's one from Zalando. They decided to focus on quality.
They thought we’re going to focus on reliability, and we’re
going to focus on performance. They got their start-up time 30
per cent faster. They also reduced crashes by 90 per cent.
That gave them real money. It is interesting: you see six per cent monthly install increase, and that doesn't sound like much. But you've got to remember this install increase isn't coming from the users who are already using the app. They got that install increase by improving the app itself, which means
better ratings were happening, more people were recommending it
to their friends, and that gave a 15 per cent increase in
revenue in lifetime value of each user, and that is really
money for the bottom line, so spending that time going, “Let’s
focus on performance and quality for a bit rather than
just adding new feature after new feature.” It gives real
benefit. This pays off across the board. We looked across the ecosystem, and apps with a high crash rate have 30
per cent more uninstalls on the very first day than apps with a
low crash rate. Unsurprising, you might think, but that’s the
first day. That carries on down the tail of the apps as time
goes on. And if you look at your app reviews, you hopefully know this already: we ran some text analysis on one-star reviews, and we found that 50 per cent of those one-star reviews mentioned crashes, stability, bugs, this sort of thing. Whereas the
five-star reviews, over half the time, 60 per cent of the time, they're mentioning speed, smooth design, and usability, so you really want that quality increase happening. Busuu: I don't know how many of you have used the language app? It's a great language app. It had a great review score, getting 4.1 stars
on the Play store. Lots of developers would kill for a 4.1
rating. They decided to focus on performance improvement and got
it from 4.1 up to 4.5. Those of you who are app developers in
Play will know how hard getting from 4.1 to 4.5 stars is. So
maybe consider focusing on performance. The other thing is, we said it's our top priority to have healthy, excellent apps in the ecosystem. We're going to do that to help users find the best apps. For example, our ranking algorithms on the Play Store look at signals: is an app smooth, is it performant, does it crash? They use those to decide how to rank the app. We do collections
promoting good apps. We give awards, and these awards drive a
lot of organic traffic. If you want your app featured, if you
want all the publicity that comes from a Google Play award,
you need to meet these performance metrics, and then we
will all have great apps in the Android ecosystem. You might be
wondering how do you do that? Is it some big secret? Johannes
is going to tell you the measures we use to look at app quality. You can see some
apps which do really well in this already. If you search for
Android excellence awards, you will find it. We released it at I/O in April, and we've updated it twice. It was last updated in
October. We’re planning on updating it quarterly. You can
see apps in that that are already excellent. Maybe go and
have a look at that on your phone. Try some of the apps and
you will see what a smooth excellent usability experience
is. It’s engaging, it is fast, it has got great design. So
Johannes, tell the developers how they can make their app
great. JOHANNES: Thanks a lot. All
right, so we have heard a lot about app quality, and why it
actually matters. Let’s now take a look at how you can actually achieve excellence
with the tools we provide in the console.
Let's look at Android Vitals. This is a programme supported by Google that really helps you understand and improve the quality of your apps. Essentially, it gives you signals around the health of your app, which is directly connected to the performance of the app. At Google I/O earlier this year, we launched Android vitals in the Play Console. This tool allows you to see aggregated performance data
of your app. This data is automatically collected from millions of devices, which means it's a huge source of data. And the good thing is, it all comes for free, so there's no need to add another SDK. You can go into the Play Console, look at the vitals right now, and you will see data. Another cool thing is that this data isn't for engineers only any more: everybody in the company can now look at the data, understand the data, analyse the data, and get their eyes around it. If you would like to reduce the crash rate, you can track it afterwards, so it's measurable. Let's take a look at how it works.
There are three main pillars for it. First of all, you have the
metrics, we have the tools, and afterwards, the rewards. So
let's go through it. First of all, the metrics. This is where we define bad behaviours, and those are essentially patterns or events that have a direct negative impact on the user experience: for example, if your app is very crashy, or if it consumes lots of battery. We
provide you with a set of tools to help you improve the quality
of the app. We do this throughout the entire life cycle
of the app, for example, before release, you use the pre-launch
report in order to track crashes and security
vulnerabilities even before you launch. Then at release, you can use the release dashboard to measure the effectiveness of your current release. And after release, you have the Android vitals to help you understand the technical performance. Last but not least,
we have the rewards, as Nick mentioned. These are there really to help you and really to celebrate those good behaviours that we would like to see in Play. We currently
We currently report on three major performance areas in Android Vitals – stability, battery, and rendering – and, for each of those, we generate bad behaviours. You see those metrics. For stability, we look at the ANR rate and the crash rate. For battery, we report if your app experiences stuck wake locks or excessive wake-ups. For rendering, we report on slow rendering or frozen frames. We generate those bad behaviours around these metrics, and, if something is wrong with your app, we notify you directly, so you’re aware of it and you can fix it right away.
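Many of those stability and rendering problems trace back to blocking work on the main thread. A minimal sketch of one way to catch that early – assuming a debug build and a custom Application class of your own (MyApp here is hypothetical) – is to enable StrictMode during development:

    import android.app.Application
    import android.os.StrictMode

    class MyApp : Application() {
        override fun onCreate() {
            super.onCreate()
            // Flag disk and network access on the main thread in debug
            // builds - a frequent cause of the ANRs and slow frames that
            // Android Vitals later reports from production.
            // BuildConfig is the class generated for your own app module.
            if (BuildConfig.DEBUG) {
                StrictMode.setThreadPolicy(
                    StrictMode.ThreadPolicy.Builder()
                        .detectDiskReads()
                        .detectDiskWrites()
                        .detectNetwork()
                        .penaltyLog()
                        .build()
                )
            }
        }
    }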
Since I/O, we have seen massive momentum: more than 65 per cent of the top 1,000 apps on Play have adopted this feature and made use of it. We worked closely with 50 partners, and they managed to reduce their crash rates from 3.5 to 1.7, which is huge. Just to give you a sense of the scale: over the same period, these apps were installed more than 1.5 billion times, which means millions of users worldwide have a measurably better experience – and therefore give a better rating.
For many partners, Vitals is now an active part of their planning – The Weather Channel, for example. By focusing on app quality, they saw a direct impact on customer satisfaction. Again, all of this results in a better, higher rating for your app. All right. So, inspired by this momentum, we are continuing to scale this programme further and further, and investing in it to make it more meaningful. We have three main things to announce.
We have expanded device coverage. We also report on five new bad behaviours – so more metrics for you – and we have overall improved the user experience, which will help you root-cause and analyse the issues you experience, if you experience any. To expand the device coverage, we worked with several OEMs to surface the performance metrics and more data for you. Initially, we reported only on Nexus and Pixel devices; after talking to many OEMs, we now cover the spectrum – high-end flagships, mid-range best sellers, and low-end devices – so this gives you an overview of the ecosystem on Play. We have scaled the programme’s device coverage by 25 times.
More data for you: we added two more stability metrics – the multiple-ANR rate and the multiple-crash rate. Repeated crashes in particular cause big user frustration, which eventually leads to a higher uninstall rate and negative ratings, because the app seems hopelessly broken. By looking at the data, you can now catch those issues earlier and fix them right away to prevent user churn. For battery, we introduce three new metrics: excessive Wi-Fi scans in the background, excessive network usage in the background, and stuck background wake locks. These activities consume unnecessary battery, and all of this results in frustration for the user.
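A minimal sketch of one defensive pattern against stuck wake locks – assuming a short piece of background work, and with "myapp:briefWork" as a made-up tag – is to acquire the lock with a timeout so it can never be held forever:

    import android.content.Context
    import android.os.PowerManager

    fun doBriefBackgroundWork(context: Context) {
        val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
        // "myapp:briefWork" is a placeholder tag.
        val wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:briefWork")
        wakeLock.acquire(60_000L)   // auto-releases after 60 seconds at most
        try {
            // ... do the short piece of work ...
        } finally {
            // Release explicitly as well; the timeout is only a safety net.
            if (wakeLock.isHeld) wakeLock.release()
        }
    }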
Then, we significantly revamped the Play Console user experience for Android Vitals to help you understand and improve your app quality. With the new overview page, you can monitor all the different metrics we provide to judge your app quality. Right at the top of the page, we now highlight any bad behaviour that was triggered in your app for any active APK – whether you uploaded it to alpha or beta, or it is in production, we give you those signals right away, so you can immediately focus on the most important actions. Also new is that you can track the performance of each metric over time after you drill down into the details, and you can see it in comparison to the bad-behaviour threshold; if you cross that line, you get a notification. So, essentially, you can measure the effectiveness of the actions taken over time.
We are also happy to help you root-cause the origin of these bad behaviours by letting you filter and slice the data in different ways: for example, if you slice the data by APK version, by OS version, or over time, you can really start to see where the root cause of an issue is. And, last but not least, we now provide you with benchmarks, to help you establish a reference point and to give you a sense of how the rest of the ecosystem performs. Then, wherever possible, we give you guidance on how to resolve any issues. For example, when debugging an ANR, you receive guidance on what actually caused the application to freeze, and we identify several different cases – deadlocks, network I/O on the main thread, and many more. Go and check it out. It is live. Use it right away.
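As an illustrative sketch of the most common fix for that last case – the URL here is only a placeholder – blocking network I/O belongs on a worker thread, never on the main thread:

    import java.net.URL
    import kotlin.concurrent.thread

    fun fetchGreeting(onResult: (String) -> Unit) {
        // Run the blocking read on a worker thread so the main thread
        // stays responsive and cannot trigger an ANR here.
        thread {
            val body = URL("https://example.com/greeting").readText()
            // Hop back to the main thread (e.g. via a Handler or
            // runOnUiThread) before touching any views.
            onResult(body)
        }
    }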
Thanks to all of this, partners are using Android Vitals in the Play Console and continuing to invest in app quality – and so are we. We would love to hear from you about how you’re using it and how we can improve the product: what was useful for you, and what can we do better? We’re over in the Sandbox on the other side. Come and visit us, share your thoughts, and help us make a better product.
Great – one last announcement, for the pre-launch report. Just to give you a quick summary: the pre-launch report is Play’s solution for the automated testing of your app, and all of that happens before the production release. It automatically crawls your APK on physical devices, looks for crashes and for security vulnerabilities, and also generates a whole bunch of screenshots across multiple form factors and multiple languages. We continue to improve it. First of all, there is no need to opt in to the service any more: every APK that you upload to alpha or beta gets automatically tested and is ready to review. We also rewrote the entire Robo crawler, so it goes deeper into the app and discovers more of the crashes that would happen for real users in production. Last but not least, we now have performance data in the pre-launch report: it reports on CPU, memory, network, and rendering. And the cool thing is that you can see this data in direct relation to a video recorded throughout the crawl, so you can see exactly where an issue occurred and go and fix it. All right, that was a brief overview of technical excellence. Now back to Nick for product excellence. NICK: So, anyone can write an app that doesn’t crash,
right? You have a plain black screen, it does nothing, and you get no crashes. You need this foundation of technical excellence, but you also need a great product, and that’s what I’m going to talk to you about – though I won’t be able to say that much here. I do encourage you: there are loads of amazing design sessions going on, or, if you’re watching online, look up the mobile design sessions afterwards. You want to fulfil the users’ needs in an excellent way, and I’m going to talk about a few ways of doing that. Users want consistency, they want a memorable experience, and they want a solution which matches their needs. So, how are you going to build features that users love? You want the latest Android features to make it easy for you to give this to users.
And I’m going to tell you a little bit about some of the tools we give you to get great design into your app. First, there’s material design. If you’re following Google, you might have seen this already, but it might be new to you. We launched it a couple of years ago, and it has been getting so much praise, not just from the Android community but from independent designers and developers around the world. Google has so many materials, and third-party sites also offer loads of tutorials to help you understand not just the code but the concepts and the vision behind it. Because you’ve got to remember: just sticking some material design components into your app doesn’t make it a material app. What makes it a material app is having those smooth, tangible surface transitions with meaningful motion. Do read the docs. They’re very good, and they explain what it’s all about.
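As a small sketch of that kind of meaningful motion – DetailActivity and the "hero" transition name are hypothetical – you might expand a tapped image into the next screen with a shared-element transition:

    import android.app.Activity
    import android.app.ActivityOptions
    import android.content.Intent
    import android.view.View

    fun openDetail(activity: Activity, heroImage: View) {
        // DetailActivity stands in for your own detail screen.
        val intent = Intent(activity, DetailActivity::class.java)
        // "hero" must match android:transitionName on the shared view
        // in both the launching layout and the detail layout.
        val options = ActivityOptions.makeSceneTransitionAnimation(activity, heroImage, "hero")
        activity.startActivity(intent, options.toBundle())
    }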
The other thing you can do is target the latest SDK. Now, I know a lot of developers might say, “But my users are all on older phones. I want maximum compatibility. Why would I target the latest SDK?” Targeting the latest SDK doesn’t mean you cut off the old phones: by changing the target SDK in your build, you give your users on new phones the most modern, most up-to-date experience. And remember, Christmas is coming up – an awful lot of phones will be sold in the next few months with new versions of Android. Those users will have the best experience, and they will tell their friends about it and get them installing your app.
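In Gradle terms – a minimal sketch in the Kotlin DSL, where the exact property names vary a little between Android Gradle plugin versions and the numbers are only examples – that separation looks like this:

    // build.gradle.kts (app module)
    android {
        compileSdk = 34

        defaultConfig {
            minSdk = 21      // keep supporting older phones
            targetSdk = 34   // opt in to the newest platform behaviour
        }
    }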
Of course, there are other platforms in the world besides Android, and the more of them you support, the better it is for your app quality, so build features which help users on whatever platform they’re on. Use things like Sign in with Google, letting users have one sign-in on whichever device they’re on, which makes life simpler.
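A minimal sketch with the Play Services Auth library – RC_SIGN_IN is just an arbitrary request code you define yourself – might look like this:

    import android.app.Activity
    import com.google.android.gms.auth.api.signin.GoogleSignIn
    import com.google.android.gms.auth.api.signin.GoogleSignInOptions

    const val RC_SIGN_IN = 9001   // arbitrary request code

    fun startGoogleSignIn(activity: Activity) {
        // Request the user's email address and basic profile.
        val gso = GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
            .requestEmail()
            .build()
        val client = GoogleSignIn.getClient(activity, gso)
        // The result arrives in onActivityResult with RC_SIGN_IN.
        activity.startActivityForResult(client.signInIntent, RC_SIGN_IN)
    }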
Another way to build an excellent, memorable experience is to be on all the platforms. Android isn’t just phones. Have you ever paused for a minute and thought, “What other platforms could my app support?” Android runs on phones, tablets, watches, TVs, VR, and even in cars nowadays, as you saw in the keynote. Could your app be there? Not every platform is right for every app, of course. But if yours is a fitness app, maybe you could be on a watch, so the user doesn’t have to carry their phone with them. If you’re an education or entertainment app, would that work well on a TV, or maybe in VR? If you’re a travel app or a guidance app, could that work really well in a user’s car? What about a game – would that work well on the TV? Think about your app: would it work well on the other platforms? The users of those platforms will have a magical time, and this sort of quality is the thing we really want on Play – and it will help you in the Play Store.
Now, monetisation is also really important because, to be honest, without money coming in, it is hard for you to fund the development of your amazing app. Hopefully you’ve looked at in-app products, but how many of you have considered using subscriptions? I’m sure you’ve heard of Pandora. They had their own subscription model, and when they switched to using subscriptions in Google Play, their revenue went up 39 per cent, with almost five times as much money coming into their company from subscriptions in Google Play. Evernote reduced churn by 40 per cent. All of this gets extra money into your company to fund these excellent apps.
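A minimal sketch with the Play Billing Library – using the older SkuDetails-style APIs, which vary by library version, and with "premium_monthly" as a hypothetical product id – shows roughly how a subscription purchase is launched:

    import android.app.Activity
    import com.android.billingclient.api.*

    fun launchSubscription(activity: Activity) {
        val billingClient = BillingClient.newBuilder(activity)
            .setListener { _, _ -> /* handle purchase updates here */ }
            .enablePendingPurchases()
            .build()

        billingClient.startConnection(object : BillingClientStateListener {
            override fun onBillingSetupFinished(result: BillingResult) {
                // Look up the subscription product, then open the purchase flow.
                val params = SkuDetailsParams.newBuilder()
                    .setSkusList(listOf("premium_monthly"))   // hypothetical product id
                    .setType(BillingClient.SkuType.SUBS)
                    .build()
                billingClient.querySkuDetailsAsync(params) { _, skuDetailsList ->
                    val sku = skuDetailsList?.firstOrNull() ?: return@querySkuDetailsAsync
                    billingClient.launchBillingFlow(
                        activity,
                        BillingFlowParams.newBuilder().setSkuDetails(sku).build()
                    )
                }
            }
            override fun onBillingServiceDisconnected() { /* retry the connection */ }
        })
    }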
Now, I’ve been talking to a lot of you in the Sandbox, and a lot of you have never tried the A/B experiments. You can do things like testing which icon works better for installs – this one or this one? Try it on one per cent of traffic and run a proper experiment to see which works better. People have had amazing results. People have used the alpha and beta channels to get feedback from their keen users. And don’t forget the tools for responding to users’ comments. A developer came up to me in the Sandbox this morning and said, “Users leave comments that don’t really make any sense – how do I respond to them?” By going into the Play Console and saying, “Thanks for that. I can help you. Give me your email address.” Users will see that, think this developer cares about their users, and you will get more installs from it. Be friendly and helpful with every comment: it will do wonders not only for your star rating but will also give you loads of ideas for improving app quality.
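You can reply from the Play Console directly, and the Google Play Developer API also exposes a reviews endpoint for doing it programmatically. A minimal sketch with the Java client library – assuming publisher is an already-authorised AndroidPublisher instance, and with a placeholder package name – might look like this:

    import com.google.api.services.androidpublisher.AndroidPublisher
    import com.google.api.services.androidpublisher.model.ReviewsReplyRequest

    fun replyToReview(publisher: AndroidPublisher, reviewId: String) {
        val reply = ReviewsReplyRequest().setReplyText(
            "Thanks for the feedback - please send us your email so we can help."
        )
        // "com.example.myapp" is a placeholder package name.
        publisher.reviews()
            .reply("com.example.myapp", reviewId, reply)
            .execute()
    }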
One example – don’t just take my word for it. They ran a few experiments at Lollipop. They changed their short description. Then they ran another experiment with different screenshots. Then they even tested things like what order the screenshots should go in on their listing. By running a few experiments like this, one after the other, they got a 17 per cent improvement – and they are by no means the highest. We’ve seen developers get 30 per cent more installs by doing these experiments – that is, of the people who came to their app listing on Play, 30 per cent more ended up installing. It doesn’t take very long; you should definitely try it. But the real key is that you need a clear hypothesis: “I think this icon will work better”; “I think appealing to this type of user will work better.” And you need some big design variations. Don’t just go from light blue to dark blue – make some really bold designs. Remember, you can limit the audience the change goes to, so it reaches only one per cent of users, and you can still get some really amazing results.
So, let me recap: your app quality affects your business’s success. This is going to make a real difference to the money you have coming in, to the number of users installing, and to users coming back to your app again and again and recommending it to their friends. We measure quality using the same signals you get in Android Vitals – we’re not leaving you in the dark. If you care about your recommendations on Play, and you care about your users, you should be going into Android Vitals and fixing the bad behaviours, the crashes, and the ANRs you see there, so that every user on Android, in every app, has an amazing experience. And it’s not just technical excellence: adopting the latest platform features and giving your app a beautiful design that really meets the users’ needs can really help you drive more traffic to your app. So, thank you so much for your time. Do come and find us – we will be in the Sandbox tomorrow and at the launch party in a minute – and we will answer any questions you have about Google Play or the Google Play Console. Look forward to seeing you later. Enjoy the launch party!
