Minimum qualifications:

  • PhD in Computer Science, a related field, or equivalent practical experience.
  • Experience owning and initiating research agendas.
  • One or more scientific publication submissions to conferences, journals, or public repositories.

Preferred qualifications:

  • PhD at the intersection of Computer Vision and Machine Learning.
  • Experience in coding.
  • Experience using 3D assets and rendering engines (e.g., Cycles and differentiable rendering).
  • Knowledge of modern generative algorithms such as Generative Adversarial Networks (GANs) and/or diffusion models.
  • Knowledge of features that connect language and visual concepts (e.g., CLIP).

About the job

As an organization, Google maintains a portfolio of research projects driven by fundamental research, new product innovation, product contribution and infrastructure goals, while providing individuals and teams the freedom to emphasize specific types of work. As a Research Scientist, you'll set up large-scale tests and deploy promising ideas quickly and broadly, managing deadlines and deliverables while applying the latest theories to develop new and improved products, processes, or technologies. From creating experiments and prototyping implementations to designing new architectures, our research scientists work on real-world problems that span the breadth of computer science, such as machine (and deep) learning, data mining, natural language processing, hardware and software performance analysis, improving compilers for mobile platforms, as well as core search and much more.

As a Research Scientist, you'll also actively contribute to the wider research community by sharing and publishing your findings, with ideas inspired by internal projects as well as by collaborations with research programs at partner universities and technical institutes all over the world.

The Gentec team is a multidisciplinary group that combines unstructured, in-the-wild data with high-quality structured data from our unique data acquisition systems and models (such as 3D human models) to develop and deploy new technologies across a wide range of applications. We also research and develop methodologies for modern generative approaches, such as diffusion models and GANs. The extended team is composed of experts from various disciplines, including computer vision, computer graphics, and machine learning.


The Platforms and Ecosystems product area encompasses Google’s various computing software platforms across environments (desktop, mobile, applications). The products provide enterprises, and ultimately end users, the ability to utilize and manage their services at scale. We build innovative and compelling software products—from apps to TVs, from laptops to phones—that have an impact on people’s lives across the world.

Responsibilities

  • Conduct cutting-edge research in building generative foundation models that combine many different modalities such as language, 2D/3D landmarks, features, 3D assets, and more.
  • Publish results at top venues (e.g., CVPR, ICCV, ECCV, NeurIPS, SIGGRAPH).
  • Collaborate cross-functionally with other researchers, engineers, and technical artists, as well as product teams.
  • Build foundation models to be used as priors for describing, synthesizing, and animating digital humans.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Deadline: 26-07-2024
