Gemini 1.0 Pro
Designed to balance quality, performance, and cost for tasks such as content generation, editing, summarization, and classification.

Gemini 1.0 Pro is a Gemini large language model that understands and generates language. It's a foundation model that performs well at a variety of natural language tasks such as summarization, instruction following, content generation, sentiment analysis, entity extraction, and classification. The type of content that Gemini 1.0 Pro can create includes document summaries, answers to questions, labels that classify content, and more.
You can interact with Gemini 1.0 Pro using a single-turn prompt and response or chat with it in a multi-turn, continuous conversation, even for code understanding and generation.
The Gemini model family has multiple model sizes and capabilities. View the other Gemini model options:
Model name | Input data | Output data | Description |
---|---|---|---|
Gemini 1.5 Flash | Text, image, video, documents, and audio | Text | A lightweight model optimized for speed and efficiency. Good for multimodal, high-volume tasks and latency-sensitive applications. |
Gemini 1.5 Pro | Text, image, video, documents, and audio | Text | Created to be multimodal (text, images, audio, documents, code, videos) and to scale across a wide range of tasks with up to 1M input tokens. |
Gemini 1.0 Pro | Text | Text | Designed to balance quality, performance, and cost for tasks such as content generation, editing, summarization, and classification |
Gemini 1.0 Pro Vision | Image and text | Text | Created to be multimodal (text, images, code) and to scale across a wide range of tasks |
Try a sample to test the capabilities of the Gemini 1.0 Pro model. To learn more, see Design text prompts.
Summarize financial table insights.
"Act as a financial analyst to summarize the key insights of given numerical tables."
Try this prompt

Extract stock price table to JSON.
"Extract the stock's daily open and close prices, as well as the high and low prices for the given date range, into JSON format."
Try this prompt

Summarize hotel reviews.
"You are looking for a hotel to book for your family of three in Port Orchard, WA. Please summarize the reviews into two bulleted lists labeled 'Pros' and 'Cons'."
Try this prompt

Generate reading test questions.
"Generate five questions that test reader comprehension of a given text paragraph."
Try this prompt

Hotel brand strategy.
"Act as a brand strategist tasked with launching a new boutique hotel chain that caters exclusively to avid book lovers and literary enthusiasts."
Try this prompt
You can use Vertex AI Studio to experiment with Gemini 1.0 Pro in the Google Cloud console. You can also use the command line or integrate it in your application using Python.
Enable the Vertex AI API.
For more information on getting set up on Google Cloud, see Get set up on Google Cloud.
To use Gemini 1.0 Pro in Vertex AI Studio, click Open in Vertex AI Studio. In Vertex AI Studio, you can enter a sample text prompt then click Submit to view the output generated by Gemini 1.0 Pro.
These instructions don't apply if you're using express mode for Google Cloud. If you're using express mode, follow the instructions for express mode instead.
The following is a sample prompt to the model. To learn more about the possible request parameters, see the Gemini API reference.
Request JSON body:
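A minimal `generateContent` request body might look like the following sketch; the prompt text is illustrative:

```json
{
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "Write a one-sentence summary of the water cycle." }
      ]
    }
  ]
}
```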
Save the request body in a file named request.json, and then execute the following command in Cloud Shell or a local terminal window with the gcloud CLI installed. Replace YOUR_PROJECT_ID with your Google Cloud project ID.
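As a sketch, the request can be sent with `curl` from an environment where gcloud is already authenticated; the model version `gemini-1.0-pro` and the region `us-central1` are illustrative choices:

```shell
MODEL_ID="gemini-1.0-pro"
LOCATION="us-central1"

# Authenticates with a short-lived access token from the gcloud CLI.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @request.json \
  "https://${LOCATION}-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT_ID/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent"
```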
Follow these instructions only if you're using express mode for Google Cloud.
The following is a sample prompt to the model. To learn more about the possible request parameters, see the Gemini API reference.
Request JSON body:
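The express-mode request body has the same shape; a minimal sketch, with an illustrative prompt:

```json
{
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "Write a one-sentence summary of the water cycle." }
      ]
    }
  ]
}
```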
Save the request body in a file named request.json, and then execute the following command in Cloud Shell or a local terminal window with the gcloud CLI installed. Replace YOUR_API_KEY with the API key that you created for using express mode.
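A sketch of the express-mode call; the global endpoint form and the `key` query parameter are assumptions based on express mode's API-key authentication, so verify them against the API reference:

```shell
# Express mode authenticates with an API key instead of an OAuth token.
curl -X POST \
  -H "Content-Type: application/json" \
  -d @request.json \
  "https://aiplatform.googleapis.com/v1beta1/publishers/google/models/gemini-1.0-pro:generateContent?key=YOUR_API_KEY"
```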
These instructions don't apply if you're using express mode for Google Cloud. If you're using express mode, follow the instructions for express mode instead.
Before trying this sample, follow the Python setup instructions in the Google Gen AI SDK quickstart using client libraries.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Replace YOUR_PROJECT_ID with your Google Cloud project ID. For more information, see the Gemini SDK reference.
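A minimal sketch using the Google Gen AI SDK (`google-genai`) with Application Default Credentials; it requires an authenticated environment, and the region and prompt are illustrative:

```python
from google import genai

# Create a client that targets Vertex AI using Application Default Credentials.
client = genai.Client(
    vertexai=True,
    project="YOUR_PROJECT_ID",  # replace with your Google Cloud project ID
    location="us-central1",     # illustrative region
)

response = client.models.generate_content(
    model="gemini-1.0-pro",
    contents="Write a one-sentence summary of the water cycle.",
)
print(response.text)
```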
Follow these instructions only if you're using express mode for Google Cloud.
Before trying this sample, follow the Python setup instructions in the Google Gen AI SDK quickstart using client libraries.
Replace YOUR_API_KEY with your API key. For more information, see the Gemini SDK reference.
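A minimal express-mode sketch with the Google Gen AI SDK; passing an API key instead of project credentials is how express mode is assumed to authenticate here:

```python
from google import genai

# In express mode, the client authenticates with an API key instead of
# Application Default Credentials.
client = genai.Client(vertexai=True, api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-1.0-pro",
    contents="Write a one-sentence summary of the water cycle.",
)
print(response.text)
```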
The Gemini Chat Completions API lets you send requests to the Vertex AI Gemini API by using the OpenAI libraries for Python and REST. If you are already using the OpenAI libraries, you can use this API to switch between calling OpenAI models and Gemini models to compare output, cost, and scalability, without changing your existing code. If you are not already using the OpenAI libraries, we recommend that you call the Gemini API directly. To learn more, view the documentation.
Start by installing the OpenAI SDK:
Next, you can either modify your client setup or change your environment configuration to use Google authentication and a Vertex AI endpoint.
To programmatically get Google credentials in Python, you can use the google-auth Python SDK:
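A sketch of fetching a short-lived access token with google-auth; it assumes Application Default Credentials are configured in the environment:

```python
import google.auth
import google.auth.transport.requests

# Obtain Application Default Credentials scoped for Google Cloud APIs.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# Refresh to populate a short-lived access token (valid for about 1 hour).
credentials.refresh(google.auth.transport.requests.Request())
print(credentials.token)
```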
Change the OpenAI SDK to point to the Vertex AI chat completions endpoint:
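A sketch of the client setup; the `/endpoints/openapi` base URL form for the chat completions endpoint is an assumption worth checking against the API reference, and the project ID and region are placeholders:

```python
import google.auth
import google.auth.transport.requests
import openai

# Fetch a short-lived access token via Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

PROJECT_ID = "YOUR_PROJECT_ID"  # replace with your Google Cloud project ID
LOCATION = "us-central1"        # illustrative region

client = openai.OpenAI(
    base_url=(
        f"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/"
        f"projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/openapi"
    ),
    api_key=credentials.token,  # OAuth access token in place of an OpenAI key
)
```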
By default, access tokens last for 1 hour. You can extend the life of your access token, or periodically refresh your token and update the openai.api_key variable.
The OpenAI SDK can read the OPENAI_API_KEY and OPENAI_BASE_URL environment variables to change the authentication and endpoint in their default client.
After you have installed gcloud, set the following variables, replacing YOUR_PROJECT_ID and YOUR_LOCATION:
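For example, a sketch that derives the token from gcloud; the base URL form mirrors the client setup earlier and should be checked against the API reference:

```shell
export PROJECT_ID="YOUR_PROJECT_ID"   # replace with your Google Cloud project ID
export LOCATION="YOUR_LOCATION"       # replace with your region, e.g. us-central1

# A short-lived OAuth access token stands in for an OpenAI API key.
export OPENAI_API_KEY="$(gcloud auth application-default print-access-token)"
export OPENAI_BASE_URL="https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi"
```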
Next, initialize the client:
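With those environment variables exported, the default client needs no explicit arguments:

```python
import openai

# The default client reads OPENAI_API_KEY and OPENAI_BASE_URL from the
# environment, so no credentials are passed here.
client = openai.OpenAI()
```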
OpenAI uses an API key to authenticate their requests. When you use the API with Google Cloud, you use an OAuth credential, such as a service account token, which is a short-lived access token. By default, access tokens last for 1 hour. You can extend the life of your access token, or periodically refresh your token and update the OPENAI_API_KEY environment variable.
The sample below is for a unary (non-streaming) request:
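A sketch of a unary request; it assumes the environment variables above are set, and the `google/` model prefix is how Gemini models are addressed through the Chat Completions API:

```python
import openai

# Reads OPENAI_API_KEY and OPENAI_BASE_URL from the environment.
client = openai.OpenAI()

# Unary (non-streaming) chat completion against a Gemini model.
response = client.chat.completions.create(
    model="google/gemini-1.0-pro",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```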
You should receive a response similar to the following:
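The exact fields vary; the following is an illustrative response shape only, with placeholder values rather than real model output:

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "google/gemini-1.0-pro",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "..."
      },
      "finish_reason": "stop"
    }
  ]
}
```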
Resource ID | Release date | Release stage | Description |
---|---|---|---|
gemini-1.0-pro-002 | 2024-04-09 | General Availability | |
gemini-1.0-pro-001 | 2024-02-15 | General Availability | |
gemini-1.0-pro | 2024-02-15 | General Availability | |