Posted:


Last July, Google and the Institute of Electrical and Electronics Engineers Power Electronics Society (IEEE PELS) announced the Little Box Challenge, a competition designed to push the forefront of new technologies in the research and development of small, high power density inverters.

In parallel, we announced the Little Box Challenge award program designed to help support academics pursuing groundbreaking research in the area of increasing the power density for DC-­to­-AC power conversion. We received over 100 proposals and today we are proud to announce the following recipients of the academic awards:

Primary academic institutions of the award recipients (principal investigators):
  • University of Colorado Boulder
  • National Taiwan University of Science and Technology
  • Universidad Politécnica de Madrid
  • Texas A&M University
  • ETH Zürich
  • University of Bristol
  • Case Western Reserve University
  • University of Illinois Urbana-Champaign
  • University of Stuttgart
  • Queensland University of Technology

The recipients hail from many different parts of the world and were chosen based on their very strong and thoughtful entries dealing with all the issues raised in the request for proposals. Each of these researchers will receive approximately $30,000 US to support their research into high power density inverters, and is encouraged to use this work in an attempt to win the $1,000,000 US grand prize for the Little Box Challenge.

There were many submissions beyond those chosen here that reviewers also considered very promising. We encourage all those who did not receive funding to still participate in the Little Box Challenge, and to pursue improvements not only in power density, but also in the reliability, efficiency, safety, and cost of inverters (and of course, to attempt to win the grand prize!).

Posted:


Imagine a world in which access to networked technology defies the constraints of desktops, laptops or smartphones. A future where we work seamlessly with connected systems, services, devices and “things” to support work practices, education, and daily interactions. While the Internet of Things (IoT) conjures a vision of “anytime, any place” connectivity for all things, the realization is complex given the need to work across interconnected and heterogeneous systems, and the special considerations needed for security, privacy, and safety.

Google is excited about the opportunities the IoT presents for future products and services. To further the development of open standards, facilitate ease of use, and ensure that privacy and security are fundamental values throughout the evolution of the field, we are in the process of establishing an open innovation and research program around the IoT. We plan to bring together a community of academics, Google experts and potentially other parties to pursue an open and shared mission in this area.

As a first step, we are announcing an open call for research proposals for the Open Web of Things:

  • Researchers interested in the Expedition Lead Grant should build a team of PIs and put forward a proposal outlining a draft research roadmap for their team(s), as well as how they propose to integrate related research conducted outside their labs (e.g., Individual Project Grants).
  • For the Individual Project Grants, we are seeking research proposals relating to the IoT in the following areas: (1) user interface and application development, (2) privacy & security, and (3) systems & protocols research.

Importantly, we are open to new and unorthodox solutions in all three of these areas, for example, novel interactions, usable security models, and new approaches for open standards and evolution of protocols.

Additionally, to facilitate hands-on work in support of this mission-driven research, we plan to provide participating faculty with access to hardware, software and systems from Google. We look forward to your submission by January 21, 2015 and expect to select proposals in early Spring. Selected PIs will be invited to participate in a kick-off workshop at Google shortly after.

Posted:


According to Eurostat data, over 5 million people under age 25 are currently out of work in Europe, while demand keeps increasing for people with digital skills such as Digital Marketing, Big Data, Ecommerce, Mobile App Development and Cloud Computing. Spanish employers in particular are finding it difficult to find individuals with the right skills, due to the lack of digital education available.

In an effort to contribute towards solving Spain’s unemployment in this sector, Google Spain, the Spanish Ministry of Industry through its business school EOI, Universidad Complutense de Madrid and the Interactive Advertising Bureau (IAB) are collaborating to build Google Activate, a series of massive open online courses (MOOCs) dedicated to teaching digital skills to young unemployed people in Spain. This is an example of how online education can be scaled to address educational and economic issues.

The inspiration for Google Activate began with the summer 2012 launch of Course Builder, an experimental platform built on Google technologies that lets anyone create an online environment for a wide variety of education-related activities. In September of that same year, Course Builder was made available in Europe as part of the Google Faculty Summit in London.

Among the early adopters of Course Builder in Europe was a partnership that included the University of Alicante, which in October 2012 launched Unimooc Aemprende, a MOOC for entrepreneurs. This is just one example of Course Builder being used to build a MOOC that addresses a broad problem, in this case the acquisition of skills for launching a small business. More than 30,000 people have participated in Unimooc since its launch.

As of today, more than 148,000 people have registered for Activate, with 13% of participants earning a certificate, which is obtained after passing 13 exams certified by either the EOI, Universidad Complutense de Madrid or the IAB. Awardees are using these certificates in their LinkedIn profiles to position themselves for jobs in the digital economy, where many jobs are being created. More than 19,000 students are already certified in one of the 5 digital areas.

Google Activate aims to increase the number of students with digital skills to 160,000, with plans to expand further to other countries around the world.

Posted:


Recently, Tsinghua University and Google collaborated to host the 2014 APAC MOOC Focused Faculty Workshop in Shanghai, China. The workshop brought together 37 professors from 12 countries across APAC, NA and EMEA to share, brainstorm and generate important topics of mutual interest in the research behind MOOCs and how to foster MOOC innovation.

During the 2-day workshop, faculty and Googlers shared lessons learned and best practices for the following focus areas:
  • Effectiveness of hybrid learning models.
  • Topics in adaptive learning, and how MOOCs can be tailored to individual students by integrating them into a student's timetable, semester or curriculum.
  • Standards and practices for interoperability between online learning platforms.
  • Current focuses and important topics for future MOOC research.

In addition to discussing these focus areas, there was ample time for participants to brainstorm and discuss innovative research ideas for the next steps in potential research collaboration. Emerging from these discussions were the following themes, identified as important future research topics:
  • Adding new interactions to MOOCs, including social features and gamification.
  • Building a data & analytics infrastructure that provides a foundation for personalized learning.
  • Interoperability across platforms, and providing access to online content for audiences with limited connectivity.

Google is committed to supporting research and innovation in online learning at scale, through both grants and our open source Course Builder platform, and we are excited to pursue potential research collaborations with partner universities to move forward on the topics discussed. Stay tuned for future announcements on research and collaboration aimed at enabling further MOOC innovation.

Posted:


The ILSVRC detection challenge is an influential academic benchmark for measuring the quality of object detection. This summer, the GoogLeNet team reported top results in the 2014 edition of the challenge, with ~2X improvement over the previous year’s best results. However, the quality of our results came at a high computational cost: processing each image took about two minutes on a state-of-the-art workstation.

Naturally, we began to think of how we could both improve the accuracy and reduce the computation time needed. Given the already high quality of previous results like those of GoogLeNet[6], we expected that further improvements to detection quality would be increasingly hard to achieve. In our recent paper Scalable, High Quality Object Detection[7], we detail advances that instead have resulted in an accelerated rate of progress in object detection:
Evolution of detection quality over time. On the y axis is the mean average precision of the best published results at any given time. The blue line shows results using individual models; the red line shows multi-model ensembles. OverFeat[8] was the state of the art at the end of last year, followed by R-CNN[1] published in May. The later measurement points are the results of our team.[6,7]
As seen in the plot above, the mean average precision has improved since August from 0.45 to 0.56: a 23% relative gain. The new approach can also match the quality of the former best solution using 140X less computation.

Most current approaches for object detection employ two phases[1]: in the first phase, some hand-engineered algorithm proposes regions of interest in the image. In the second phase, each proposed region is run through a deep neural network, identifying which proposed patches correspond to an object (and what that object is).
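As a rough sketch of this generic two-phase pipeline (the function bodies, names and threshold below are illustrative placeholders, not the actual GoogLeNet/MultiBox code), phase one proposes candidate boxes and phase two scores each candidate with a deep network:

    # Minimal sketch of a two-phase detector; the helpers are dummy stand-ins.

    def propose_regions(image):
        # Phase 1: a proposal method (hand-engineered, or learned as in MultiBox)
        # returns candidate boxes as (x, y, width, height) tuples.
        height, width = image.shape[:2]
        return [(0, 0, width // 2, height // 2)]

    def classify_patch(image, box):
        # Phase 2: crop the box and run it through a deep classification network,
        # returning (label, confidence).
        return "dog", 0.9

    def detect(image, score_threshold=0.5):
        detections = []
        for box in propose_regions(image):             # phase 1
            label, score = classify_patch(image, box)  # phase 2
            if label != "background" and score >= score_threshold:
                detections.append((box, label, score))
        return detections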

For the first phase, the common wisdom[1,2,3,4] was that it took skillfully crafted code to produce high quality region proposals. This comes with a drawback, though: these methods don’t produce reliable scores for the proposed regions, which forces the second phase to evaluate most of the proposed patches in order to achieve good results.

So we revisited our prior “MultiBox” work[5], in which we let the computer learn to pick the proposals, to see whether we could avoid relying on the hand-crafted methods above. Although the MultiBox method, using previous-generation vision network architectures, could not compete with the hand-engineered proposal approaches, relying on machine learning alone has several advantages. First, the quality of proposals increases with each improved network architecture or training methodology, without additional programming effort. Second, the regions come with confidence scores, which can be used to trade off running time against quality. Additionally, the implementation is simplified.
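For instance, because each learned proposal carries a confidence score, the second phase can evaluate only the most confident boxes; a minimal sketch of that trade-off (the cutoff k is illustrative):

    def select_top_proposals(proposals, k=100):
        # proposals: list of (box, confidence) pairs from a learned proposal
        # network such as MultiBox. Keeping only the k most confident boxes
        # trades a little coverage for a large cut in the number of patches
        # the second-phase network has to evaluate.
        return sorted(proposals, key=lambda p: p[1], reverse=True)[:k]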

Once we used new variants of the network architecture introduced in [6], MultiBox also started to perform much better: we could now match the coverage of alternative methods with half as many proposal patches. We also changed our networks to take the context of objects into account, fueling additional quality gains in the second phase. Furthermore, we came up with a new way to train deep networks to learn robustly even when some objects are not annotated in the training set, which improved both phases.

Besides the significant gains in mean average precision, we can now cut the number of evaluated patches dramatically at a modest loss of quality: the task that used to take 2 minutes of processing per image on a workstation with the GoogLeNet ensemble (of 6 networks) is now performed in under a second with a single network, without using GPUs. If we constrain ourselves to a single category like “dog”, we can now process 50 images per second on the same machine with a more streamlined approach[7] that skips the proposal generation step altogether.

As a core area of research in computer vision, object detection provides strong signals for photo and video search, and high quality detection could prove useful for self-driving cars and automatically generated image captions. We look forward to continuing research in this field.

References:

[1]  Rich feature hierarchies for accurate object detection and semantic segmentation
by Ross Girshick and Jeff Donahue and Trevor Darrell and Jitendra Malik (CVPR, 2014)

[2]  Prime Object Proposals with Randomized Prim’s Algorithm
by Santiago Manen, Matthieu Guillaumin and Luc Van Gool

[3]  Edge boxes: Locating object proposals from edges
by C. Lawrence Zitnick and Piotr Dollár (ECCV 2014)

[4]  BING: Binarized normed gradients for objectness estimation at 300fps
by Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin and Philip Torr (CVPR 2014)

[5]  Scalable Object Detection using Deep Neural Networks
by Dumitru Erhan, Christian Szegedy, Alexander Toshev, and Dragomir Anguelov

[6]  Going deeper with convolutions
by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke and Andrew Rabinovich

[7]  Scalable, high quality object detection
by Christian Szegedy, Scott Reed, Dumitru Erhan and Dragomir Anguelov

[8]  OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
by Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus and Yann LeCun


* A PhD student at University of Michigan -- Ann Arbor and Software Engineering Intern at Google

Posted:


By becoming more conscious of our own stereotypes and biases, and making use of the insights revealed by the research on bias and stereotype threat, unconscious decision making, and cognitive illusions, each of us can bring more to our work and create diverse, innovative, and meaningful organizations.

Since 2009, I’ve been reading literature about the challenges and successes in making diverse teams effective, and speaking about this research. My goal is to help everyone understand more about unconscious decision-making and other barriers to inclusion, and through knowledge, combat these effects.

A short summary:
  • A team that is heterogeneous in meaningful ways is good for innovation, and good for business.
  • There are many challenges to making such teams effective, such as unconscious decision making, stereotype threat, and other cognitive illusions.
  • There is repeatable quantitative research which shows ways to combat some of these effects.
  • The barriers to effectiveness may seem overwhelming, but there is hope! Meaningful change is possible, and some examples of successful change are cited below.
In a bit more detail:
  1. Diversity is good for innovation and business. There is a correlation between financial success and the diversity of leadership teams, as shown in research by Catalyst, McKinsey and Cedric Herring. Further, research shows a strong correlation between having women on teams and innovation, and concludes that the presence of women is linked to the social skills required to get ideas percolating into the open.
  2. We all make decisions unconsciously, influenced by our implicit associations. As an example of these effects, a large proportion of CEOs are taller than the population average, and height is strongly correlated with financial and career success. It’s long been argued that women and underrepresented minorities are not represented in CEO leadership because there aren’t enough qualified individuals in the labor pool. This “pipeline issue” argument can’t be made for short and average-height people, however. Simple, repeatable tests measure, via response time and error rate, the implicit associations we have between concepts. These associations are created as an adaptive response, but we must understand our own implicit biases in order to make better decisions.
  3. Stereotype threat plays a role in preventing people from being fully effective. The low representation of women and minorities in science has long been the source of a troubling question: is this an indication of a difference in innate ability (see Ben Barres’ response to Lawrence Summers’ remarks), or the result of some other effect? Claude Steele and his colleagues elegantly showed that two groups of people can have similar or opposite reactions, depending on the way a situation is presented. These and other experiments show that stereotype threat can compromise the performance of the subject of a stereotype, if he or she knows about the stereotype and cares about it.
  4. Change is possible. The above and other challenges may make it seem nearly impossible to create a diverse and highly functioning organization, but dramatic change can be made. Take, for example, the discovery of biased decision making and the effective changes made via the use of data in the MIT Science Faculty Study, or the amazing changes at Harvey Mudd College, which not only increased the participation of women as Computer Science majors from 12% to 40% in five years, but also increased the total number of CS majors from 25–30 per year to 70 CS graduates in the class of 2014.
If you’re interested in learning more, watch the video about the data on diversity below. You can read the full research in the November issue of Communications of the ACM, and read even more using the full bibliography.

Posted:


While the availability and size of data sets across a wide range of sources, from medical to scientific to commercial, continue to grow, there are relatively few people trained in the statistical and machine learning methods required to test hypotheses, make predictions, and otherwise create interpretable knowledge from this data. But what if one could automatically discover human-interpretable trends in data in an unsupervised way, and then summarize these trends in textual and/or visual form?

To help make progress in this area, Professor Zoubin Ghahramani and his group at the University of Cambridge received a Google Focused Research Award in support of The Automatic Statistician project, which aims to build an "artificial intelligence for data science".

So far, the project has mostly been focussing on finding trends in time series data. For example, suppose we measure the levels of solar irradiance over time, as shown in the plot below.

This time series clearly exhibits several sources of variation: it is approximately periodic (with a period of about 11 years, known as the Schwabe cycle), but with notably low levels of activity in the late 1600s. It would be useful to automatically discover these kinds of regularities (as well as irregularities), to help further basic scientific understanding, as well as to help make more accurate forecasts in the future.

We can model such data using non-parametric statistical models based on Gaussian processes. Such methods require the specification of a kernel function which characterizes the nature of the underlying function that can accurately model the data (e.g., is it periodic? is it smooth? is it monotonic?). While the parameters of this kernel function are estimated from data, the form of the kernel itself is typically specified by hand, and relies on the knowledge and experience of a trained data scientist.
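As a minimal sketch of this hand-specification step (using scikit-learn rather than the project's own code, with illustrative data and parameters), a modeler might guess that the solar data is roughly periodic with a smoothly changing envelope plus noise, and encode that guess directly in the kernel:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

    # Hand-chosen kernel form: periodic (roughly 11-year cycle) times a smooth
    # envelope, plus observation noise. Picking this form is the step that
    # normally relies on a trained data scientist.
    kernel = (ExpSineSquared(length_scale=1.0, periodicity=11.0)
              * RBF(length_scale=50.0)
              + WhiteKernel(noise_level=0.1))

    # Illustrative stand-in for the solar irradiance series (year vs. level).
    X = np.linspace(1610, 2011, 400).reshape(-1, 1)
    y = np.sin(2 * np.pi * X.ravel() / 11.0) + 0.1 * np.random.randn(len(X))

    # The kernel parameters are fit to the data; the kernel form itself is not.
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    mean, std = gp.predict(X, return_std=True)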

Prof Ghahramani's group has developed an algorithm that can automatically discover a good kernel, by searching through an open-ended space of sums and products of kernels as well as other compositional operations. After model selection and fitting, the Automatic Statistician translates each kernel into a text description describing the main trends in the data in an easy-to-understand form.
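A hedged sketch of this kind of search, using a plain greedy expansion over scikit-learn kernels and scoring candidates by fitted log marginal likelihood (the group's actual algorithm and model-selection criterion may differ in detail):

    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import (RBF, ExpSineSquared,
                                                  RationalQuadratic)

    BASE_KERNELS = [RBF(), ExpSineSquared(), RationalQuadratic()]

    def model_score(kernel, X, y):
        # Fit the kernel parameters and score the model; the real system uses a
        # criterion that also penalizes model complexity (e.g. BIC).
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        return gp.log_marginal_likelihood_value_

    def greedy_kernel_search(X, y, depth=3):
        # Start from the best single base kernel, then repeatedly try extending
        # the current best expression with sums and products of base kernels.
        best = max(BASE_KERNELS, key=lambda k: model_score(k, X, y))
        best_score = model_score(best, X, y)
        for _ in range(depth - 1):
            candidates = ([best + k for k in BASE_KERNELS] +
                          [best * k for k in BASE_KERNELS])
            for cand in candidates:
                s = model_score(cand, X, y)
                if s > best_score:
                    best, best_score = cand, s
        return best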

The compositional structure of the space of statistical models neatly maps onto compositionally constructed sentences allowing for the automatic description of the statistical models produced by any kernel. For example, in a product of kernels, one kernel can be mapped to a standard noun phrase (e.g. ‘a periodic function’) and the other kernels to appropriate modifiers of this noun phrase (e.g. ‘whose shape changes smoothly’, ‘with growing amplitude’). The end result is an automatically generated 5-15 page report describing the patterns in the data with figures and tables supporting the main claims. Here is an extract of the report produced by their system for the solar irradiance data:
Extract of the report for the solar irradiance data, automatically generated by the automatic statistician.
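As a toy illustration of this kernel-to-sentence mapping (the phrase tables and kernel names below are made up for illustration and are not the project's actual grammar), one factor of a product supplies the noun phrase and the remaining factors become modifiers:

    # Toy mapping from kernel types in a product to English fragments.
    NOUN_PHRASE = {
        "periodic": "a periodic function",
        "linear": "a linearly varying function",
        "constant": "a constant function",
    }
    MODIFIER = {
        "rbf": "whose shape changes smoothly",
        "linear": "with linearly growing amplitude",
    }

    def describe_product(kernel_types):
        # The first factor with a noun-phrase translation becomes the head;
        # the remaining factors modify it.
        head = next(k for k in kernel_types if k in NOUN_PHRASE)
        modifiers = [MODIFIER[k] for k in kernel_types if k != head and k in MODIFIER]
        return " ".join([NOUN_PHRASE[head]] + modifiers)

    print(describe_product(["periodic", "rbf", "linear"]))
    # -> "a periodic function whose shape changes smoothly with linearly growing amplitude"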
The Automatic Statistician is currently being generalized to find patterns in other kinds of data, such as multidimensional regression problems and relational databases. A web-based demo of a simplified version of the system was launched in August 2014; it allowed a user to upload a dataset and receive an automatically produced analysis after a few minutes. An expanded version of the service will be launched in early 2015 (we will post details when available). We believe this will have many applications for anyone interested in Data Science.