OpenAI API: Refused to set unsafe header "User-Agent"

Posted 2025-01-31 10:59:01

I don't understand why I am receiving this error.

Refused to set unsafe header "User-Agent"

I am trying to use OpenAI's API for a personal project. I don't understand why it's refusing to set this "unsafe header", or how (or whether) I can make it safe. I've tried googling this issue; the top link is a GitHub thread explaining that it might be something Chrome does, but I tried the app in Safari and it didn't work there either.

// (Configuration and OpenAIApi are imported from the "openai" package)
const onFormSubmit = (e) => {
    e.preventDefault();

    const formData = new FormData(e.target),
      formDataObj = Object.fromEntries(formData.entries())
    console.log(formDataObj.foodDescription);

    //////OPENAI
    const configuration = new Configuration({
      apiKey: process.env.REACT_APP_OPENAI_API_KEY,
    });
    const openai = new OpenAIApi(configuration);

    openai.createCompletion("text-curie-001", {
      prompt: `generate food suggestions from the following flavor cravings: ${formDataObj.foodDescription}`,
      temperature: 0.8,
      max_tokens: 256,
      top_p: 1,
      frequency_penalty: 0,
      presence_penalty: 0,
    })
    .then((response) => {
      setState({
        heading: `AI Food Suggestions for: ${formDataObj.foodDescription}`,
        response: `${response.data.choices[0].text}`
      });
    })
    .catch((err) => console.error(err));
  }

Comments (5)

素罗衫 2025-02-07 10:59:01

As you stated, you're receiving the error because the OpenAI API client refused to set the unsafe header "User-Agent". Since using it requires access to sensitive information (the API key), the Node.js client intentionally restricts cross-origin use to prevent accidentally revealing secrets.

For a workaround, see https://github.com/openai/openai-node/issues/6 where AmanKishore manually requests completions.

I ended up writing my own completion function like so:

// Note: `openai_api_key` is assumed to be defined in scope (e.g. loaded from an env variable).
const DEFAULT_PARAMS = {
  "model": "text-davinci-002",
  "temperature": 0.7,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}

export async function query(params = {}) {
  const params_ = { ...DEFAULT_PARAMS, ...params };
  const requestOptions = {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + String(openai_api_key)
    },
    body: JSON.stringify(params_)
  };
  const response = await fetch('https://api.openai.com/v1/completions', requestOptions);
  const data = await response.json();
  return data.choices[0].text;
}
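As a side note, the `{ ...DEFAULT_PARAMS, ...params }` spread merge is what lets callers override any single default while keeping the rest. A small illustration (the `mergeParams` helper name is made up for this example):

```javascript
// Spread merge: later keys win, so caller-supplied values replace defaults
// while untouched defaults survive. (mergeParams is illustrative only.)
const DEFAULT_PARAMS = {
  model: "text-davinci-002",
  temperature: 0.7,
  max_tokens: 256,
};

function mergeParams(params = {}) {
  return { ...DEFAULT_PARAMS, ...params };
}

const merged = mergeParams({ prompt: "suggest a snack", temperature: 0 });
// merged.temperature → 0 (overridden), merged.model → "text-davinci-002" (default kept)
```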

难如初 2025-02-07 10:59:01

This worked for me, but it depends on implementation details of the Configuration class:

// This hardcodes insertion of 'User-Agent'
let config = new Configuration({ apiKey: key });

// Delete it
delete config.baseOptions.headers['User-Agent'];

let api = new OpenAIApi(config);

七七 2025-02-07 10:59:01

Using Jacob's answer as a reference, here is a workaround for the GPT-3.5 Turbo API.

const [chatList, setChatList] = useState([]); // your chat history (useState from "react")
async function createCompletion(params = {}) {
        const DEFAULT_PARAMS = {
            model: "gpt-3.5-turbo",
            messages: [{ role: "user", content: "Hello World" }],
            // max_tokens: 4096,
            temperature: 0,
            // frequency_penalty: 1.0,
            // stream: true,
        };
        const params_ = { ...DEFAULT_PARAMS, ...params };
        const result = await fetch('https://api.openai.com/v1/chat/completions', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': 'Bearer ' + String(your_api_key)
            },
            body: JSON.stringify(params_)
        });
        const stream = result.body
        const output = await fetchStream(stream);
        setChatList(previousInputs => (previousInputs.concat(output.choices[0].message)));
    }

A helper function fetchStream() was needed because the OpenAI response returns a ReadableStream, which is consumed here with a recursive read.

 async function fetchStream(stream) {
    const reader = stream.getReader();
    let charsReceived = 0;
    const li = document.createElement("li");

    // read() returns a promise that resolves
    // when a value has been received
    const result = await reader.read().then(
        function processText({ done, value }) {
            // Result objects contain two properties:
            // done  - true if the stream has already given you all its data.
            // value - some data. Always undefined when done is true.
            if (done) {
                console.log("Stream complete");
                return li.innerText;
            }
            // value for fetch streams is a Uint8Array; appending it as a
            // text node stringifies it into comma-separated byte values,
            // which are reassembled below with String.fromCharCode
            charsReceived += value.length;
            const chunk = value;
            console.log(`Received ${charsReceived} characters so far. Current chunk = ${chunk}`);
            li.appendChild(document.createTextNode(chunk));
            return reader.read().then(processText);
        });
    const list = result.split(",")
    const numList = list.map((item) => {
        return parseInt(item)
    })
    const text = String.fromCharCode(...numList);
    const response = JSON.parse(text)
    return response
}
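The comma-split decode above works because stringifying a Uint8Array yields comma-separated byte values, but that round-trip is only safe for single-byte characters. A simpler sketch (my alternative, not from the original answer; assumes a modern browser or Node 18+, where TextDecoder and ReadableStream are available) accumulates the chunks and decodes them directly:

```javascript
// Alternative to the comma-split approach: decode the stream's Uint8Array
// chunks with TextDecoder, then parse the accumulated JSON once.
async function readStreamAsJson(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder("utf-8");
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true handles multi-byte characters split across chunks
    text += decoder.decode(value, { stream: true });
  }
  text += decoder.decode(); // flush any buffered bytes
  return JSON.parse(text);
}
```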

白色秋天 2025-02-07 10:59:01

I had the same problem, and this code worked for me!

const configuration = new Configuration({
  apiKey: "YOUR_OPENAI_KEY",
  organization: "YOUR_OPENAI_ORGANIZATION",
});

// Replacing the default headers drops the "User-Agent" entry entirely
configuration.baseOptions.headers = {
  Authorization: "Bearer " + "YOUR_OPENAI_KEY",
};
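One caveat (my observation, not from the original answer): replacing `baseOptions.headers` wholesale also discards any other default headers, such as `Content-Type`. A generic way to drop just the one offending key from a headers object is object-rest destructuring:

```javascript
// Remove only the "User-Agent" entry from a headers object, keeping the rest.
function stripUserAgent(headers) {
  const { "User-Agent": _removed, ...rest } = headers;
  return rest;
}

const cleaned = stripUserAgent({
  "User-Agent": "OpenAI/NodeJS/3.x",
  "Content-Type": "application/json",
});
// cleaned keeps Content-Type but has no User-Agent key
```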

泛滥成性 2025-02-07 10:59:01

This error occurs when you call OpenAI from the frontend / client side instead of from a secure backend / server.
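A minimal sketch of what that looks like (names like `buildUpstreamRequest` and the `/api/complete` route are made up for illustration; assumes Node 18+ with global fetch): the browser talks only to your own server, and the API key never leaves it.

```javascript
// Sketch of a server-side proxy: the key stays on the server, so the
// browser never needs the OpenAI SDK (and never hits the User-Agent issue).
function buildUpstreamRequest(body, apiKey) {
  return {
    url: "https://api.openai.com/v1/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer " + apiKey,
      },
      body: JSON.stringify(body),
    },
  };
}

// In an Express-style handler this might be wired up as (illustrative):
// app.post("/api/complete", async (req, res) => {
//   const { url, options } = buildUpstreamRequest(req.body, process.env.OPENAI_API_KEY);
//   const upstream = await fetch(url, options);
//   res.json(await upstream.json());
// });
```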
