Fetching more than 1 MB with urlfetch on App Engine

Posted 2024-10-07 02:21:19

To fetch() more than 1 MB in App Engine, I use the Range header and then combine the pieces. Here is my code:

int startpos = 0;
int endpos;
int seg = 1;
int len = 1;
while (len > 0) {
    endpos = startpos + seg;
    HttpURLConnection con = (HttpURLConnection) u.openConnection();
    con.setRequestMethod("GET");
    con.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14");
    con.setRequestProperty("Range", "bytes=" + startpos + "-" + endpos);
    con.connect();
    InputStream in = con.getInputStream();

    len = con.getContentLength();
    byte[] b = new byte[len];
    in.read(b, 0, len);

    startpos += len;
}

But when it reaches "InputStream in = con.getInputStream();", the debugger reports a "URL Fetch Response too large" problem, so I don't know what is wrong with this code.
Are there other ways to fetch() more than 1 MB?
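
For reference, a minimal sketch of the "fetch in ranges and combine" idea described above, with the pieces actually accumulated. The URL and segment size are placeholders, and it assumes the remote server honors the Range header; a robust version would also check the response code before reading:

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeFetchSketch {
    public static void main(String[] args) throws Exception {
        URL u = new URL("http://example.com/large-file.bin");  // placeholder URL
        final int seg = 512 * 1024;                            // segment size, kept under the 1 MB limit
        ByteArrayOutputStream combined = new ByteArrayOutputStream();
        int startpos = 0;
        int received = seg;
        while (received == seg) {                  // a short segment means we reached the end
            HttpURLConnection con = (HttpURLConnection) u.openConnection();
            // HTTP ranges are zero-based and inclusive on both ends
            con.setRequestProperty("Range", "bytes=" + startpos + "-" + (startpos + seg - 1));
            InputStream in = con.getInputStream();
            byte[] buf = new byte[8192];
            received = 0;
            int n;
            while ((n = in.read(buf)) > 0) {       // read() may return fewer bytes than requested
                combined.write(buf, 0, n);
                received += n;
            }
            in.close();
            startpos += received;
        }
        System.out.println("Fetched " + combined.size() + " bytes in total");
    }
}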

殊姿 2024-10-14 02:21:19

Not all HTTP servers support range requests, especially when it comes to frameworks serving dynamic content - they'll simply ignore the Range header and send you the whole response.

The recent release of 1.4.0 increased the URLFetch response limit to 32MB, though, so you no longer need to do this.
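
As a hypothetical illustration of the first point: a server that honors the Range header answers with 206 Partial Content, while one that ignores it answers 200 and sends the full body, so a quick status-code check tells you whether chunking will help at all (the URL below is a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

public class RangeSupportCheck {
    public static void main(String[] args) throws Exception {
        URL u = new URL("http://example.com/large-file.bin");  // placeholder URL
        HttpURLConnection con = (HttpURLConnection) u.openConnection();
        con.setRequestProperty("Range", "bytes=0-1023");        // ask for the first 1 KB only
        int code = con.getResponseCode();
        if (code == HttpURLConnection.HTTP_PARTIAL) {
            // 206: the Range header was honored, so chunked fetching can work
            System.out.println("Range supported: " + con.getHeaderField("Content-Range"));
        } else {
            // 200 (or anything else): the header was ignored and the whole response
            // comes back, so the URLFetch size limit still applies
            System.out.println("Range ignored, status " + code);
        }
    }
}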

日暮斜阳 2024-10-14 02:21:19

I had the same problem and hacked up a little class to simulate an input stream on App Engine using the HTTP Range parameter. It allows you to read files bigger than the limit in a line-oriented fashion. I am attaching it below, although you may need to adapt it for your purposes:

package com.theodorebook.AEStreamer;

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.Arrays;
import java.util.logging.Logger;

/**
 * A class to simulate a stream in appengine, which insists on downloading
 * an entire URL before letting you do anything with it.  This enables one
 * to read files larger than the size limits.
 * 
 * @author Theodore Book (theodorebook at gmail dot com)
 *
 */
public class AEStreamer {
    private static final int BITE_SIZE = 0x10000;   //How big a chunk to grab at a time
    private static final byte TERMINATOR = '\n';    //String terminator

    private int mCurrentPosition = 0;   //The current position in the file
    private int mOffset = -1;   //The offset of the current block
    private long mValidBytes = 0;   //The number of valid bytes in the chunk
    private byte[] mChunk = new byte[BITE_SIZE];
    private boolean mComplete = false;
    private String mURL;

    private static final Logger log = Logger.getLogger(AEStreamer.class.getName());

    public AEStreamer(String url) {
        mURL = url;
    }

    /**
     * Returns the next line from the source, or null on empty
     * @return
     */
    public String readLine() {
        String line = "";

        //See if we have something to read
        if (mCurrentPosition >= mOffset + mValidBytes) {
            if (mComplete)
                return null;
            readChunk();
        }
        if (mValidBytes == 0)
            return null;

        //Read until we reach a terminator
        int endPtr = mCurrentPosition - mOffset;
        while (mChunk[endPtr] != TERMINATOR) {
            endPtr++;

            //If we reach the end of the block
            if (endPtr == mValidBytes) {
                line += new String(Arrays.copyOfRange(mChunk, mCurrentPosition - mOffset, endPtr));
                mCurrentPosition += (endPtr - mCurrentPosition + mOffset);
                if (mComplete) {
                    return line;
                } else {
                    readChunk();
                    endPtr = mCurrentPosition - mOffset;
                }
            }
        }
        line += new String(Arrays.copyOfRange(mChunk, mCurrentPosition - mOffset, endPtr));
        mCurrentPosition += (endPtr - mCurrentPosition + mOffset);
        mCurrentPosition++;
        return line;
    }

    /**
     * Reads the next chunk from the server
     */
    private void readChunk() {
        if (mOffset < 0)
            mOffset = 0;
        else
            mOffset += BITE_SIZE;

        try {
            URL url = new URL(mURL);
            URLConnection request = url.openConnection();
            request.setRequestProperty("Range", "bytes=" + (mOffset + 1) + "-" + (mOffset + BITE_SIZE)); 
            InputStream inStream = request.getInputStream();
            mValidBytes = inStream.read(mChunk);
            inStream.close();
        } catch (Exception e) {
            log.severe("Unable to read " + mURL + ": " + e.getLocalizedMessage());
            mComplete = true;
            mValidBytes = 0;
            return;
        }

        if (mValidBytes < BITE_SIZE)
            mComplete = true;

        //log.info("Read " + mValidBytes + " bytes");
    }
}
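
For completeness, a usage sketch for the class above; the URL is a placeholder, and readLine() returns null once the file is exhausted:

import com.theodorebook.AEStreamer.AEStreamer;

public class AEStreamerDemo {
    public static void main(String[] args) {
        // Placeholder URL; any text file larger than the URLFetch limit would do
        AEStreamer streamer = new AEStreamer("http://example.com/big-file.txt");
        String line;
        while ((line = streamer.readLine()) != null) {
            System.out.println(line);  // process each line as it is read
        }
    }
}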