g4f is very similar to the OpenAI Python SDK; the g4f completion syntax is designed to work exactly like the SDK, with only some class names differing.

The Client Class

As of the v1 release of the OpenAI SDK, g4f is initialised with the g4f.client.Client class.

from g4f.client import Client

client = Client()

The Client class accepts several arguments; for example, you can set a default provider there. More on the provider structure of g4f further down.

from g4f.client   import Client
from g4f.Provider import You

client = Client(
    provider = You
)

Client(...) Class Arguments

provider
ProviderType
default: "None"

The provider to use for the client. If not specified, the default provider will be used.

image_provider
ImageProvider
default: "None"

The image provider to use for the client. If not specified, the default provider will be used.

Create completions

Once initialised, you can use the client object to create chat completions. Here again, client.chat.completions.create works like the OpenAI SDK, with the same parameters.

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)

print(chat_completion.choices[0].message.content or "")

create(...) basic Params

(For more advanced params and usage and examples, refer to the Full Documentation)

messages
list
required

A list of messages to use for the chat completion. Each message should be a dictionary with the keys role and content. It represents the conversation between a user and an assistant.

role
str
required

One of "system", "user" or "assistant".

content
str
required
[
  {
    "role": "user",
    "content": "Hello"
  }
]
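A conversation with history is just a longer list: each dict is one turn, and an optional system message can lead the list. A minimal sketch of building such a list:

```python
# Build a multi-turn conversation for the `messages` parameter.
# An optional "system" message sets behaviour; "user" and
# "assistant" turns then alternate through the history.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "2 + 2 equals 4."},
    {"role": "user", "content": "And doubled?"},
]

# Each entry must carry exactly the keys "role" and "content".
assert all(set(m) == {"role", "content"} for m in messages)
```

The whole list is passed as messages=messages to client.chat.completions.create, so the assistant sees the full conversation history on every call.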
provider
ProviderType
default: "Auto"

The provider to use for the chat completion. Imported from g4f.Provider.ProviderName. Different working providers are outlined in the g4f README.

stream
bool
default: "False"

If set, the completion will be streamed: tokens are sent sequentially as soon as they are generated.
create will return a generator.

Stream completions

You may want to use g4f with a streaming response; you can do this by passing stream=True to the client.chat.completions.create method.

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)

for completion in chat_completion:
    print(completion.choices[0].delta.content or "")

If you don't want each token to start on a new line:

for completion in chat_completion:
    print(completion.choices[0].delta.content or "", end="", flush=True)
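To assemble the streamed output into one string, collect the deltas and join them afterwards. The loop body below mirrors the one above; the chunks are simulated with plain strings here so the sketch is self-contained:

```python
# Stand-ins for the values of completion.choices[0].delta.content;
# with g4f these would come from iterating over the create(...) generator.
deltas = ["Hi", " there", "!", None]  # the final chunk's delta may be None

parts = []
for delta in deltas:
    parts.append(delta or "")  # `or ""` guards against None deltas
full_text = "".join(parts)

print(full_text)  # Hi there!
```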

Full g4f example:

from g4f.client import Client

client = Client()

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)

for completion in chat_completion:
    print(completion.choices[0].delta.content or "", end="", flush=True)
user@mac-air:~$ python3 main.py
Hi
 there
!
 How
 can
 I
 assist
 you
 today
?

For more advanced usage and examples, refer to the Full Documentation.