Downloading, pausing, and resuming a download with Indy components

Posted 2024-09-03 14:35:13

I am using the TIdHTTP component to download a file from the internet. I am wondering whether it is possible to pause and resume the download using this component, or perhaps another Indy component.

This is my current code. It downloads a file correctly (without resume), but now I want to pause the download, close my application, and when my application restarts, resume the download from the last saved position.

// Assumed declaration (the original snippet omitted it); HttpWork, FSize,
// AddLog and GetURLFilename are defined elsewhere in the asker's class.
function DownloadFile(const Url, LocalFile: string): Boolean;
var
  Http: TIdHTTP;
  MS: TMemoryStream;
begin
  Result := True;
  Http := TIdHTTP.Create(nil);
  MS := TMemoryStream.Create;
  try
    try
      Http.OnWork := HttpWork; // this event reports the progress of the download
      Http.Head(Url);
      FSize := Http.Response.ContentLength;
      AddLog('Downloading File ' + GetURLFilename(Url) + ' - ' + FormatFloat('#,', FSize) + ' Bytes');
      Http.Get(Url, MS);
      MS.SaveToFile(LocalFile);
    except
      on E: Exception do
      begin
        Result := False;
        AddLog(E.Message);
      end;
    end;
  finally
    Http.Free;
    MS.Free;
  end;
end;
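
Before trying to resume, it is worth checking whether the server accepts range requests at all. A minimal sketch (the helper name is mine, not part of the original code):

// Hypothetical helper: returns True when the server advertises byte-range
// support in reply to a HEAD request; reads the raw header list that
// TIdHTTP exposes on its Response object.
function ServerSupportsResume(Http: TIdHTTP; const Url: string): Boolean;
begin
  Http.Head(Url);
  // Servers that allow partial GETs normally answer 'Accept-Ranges: bytes'.
  Result := SameText(Http.Response.RawHeaders.Values['Accept-Ranges'], 'bytes');
end;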


2 Answers

む无字情书 2024-09-10 14:35:13

The following code works for me. It downloads the file in chunks:

procedure Download(Url, LocalFile: string;
  WorkBegin: TWorkBeginEvent; Work: TWorkEvent; WorkEnd: TWorkEndEvent);
var
  Http: TIdHTTP;
  fFileStream: TFileStream; // declared locally; the original snippet relied on an outer-scope variable
  quit: Boolean;
  FLength, aRangeEnd: Integer;
begin
  Http := TIdHTTP.Create(nil);
  fFileStream := nil;
  try
    try
      Http.OnWorkBegin := WorkBegin; // the original never hooked this parameter up
      Http.OnWork := Work;
      Http.OnWorkEnd := WorkEnd;

      Http.Head(Url);
      FLength := Http.Response.ContentLength;
      quit := False;
      repeat
        if not FileExists(LocalFile) then begin
          // first run: create the target file
          fFileStream := TFileStream.Create(LocalFile, fmCreate);
        end
        else begin
          // resume: reopen the partial file and rewind up to 4 KB so a
          // possibly corrupt tail from the aborted transfer is overwritten
          // (Max comes from the Math unit)
          fFileStream := TFileStream.Create(LocalFile, fmOpenReadWrite);
          quit := fFileStream.Size >= FLength;
          if not quit then
            fFileStream.Seek(Max(0, fFileStream.Size - 4096), soFromBeginning);
        end;

        try
          aRangeEnd := fFileStream.Size + 50000; // fetch roughly 50 KB per request

          if aRangeEnd < FLength then begin
            Http.Request.Range := IntToStr(fFileStream.Position) + '-' + IntToStr(aRangeEnd);
          end
          else begin
            // last chunk: request everything that is left
            Http.Request.Range := IntToStr(fFileStream.Position) + '-';
            quit := True;
          end;

          Http.Get(Url, fFileStream);
        finally
          fFileStream.Free;
        end;
      until quit;
      Http.Disconnect;

    except
      on E: Exception do
      begin
        // errors are swallowed here; log or re-raise as your application requires
        // AddLog(E.Message);
      end;
    end;
  finally
    Http.Free;
  end;
end;
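
For reference, a hypothetical call site (HttpWork and HttpWorkEnd are assumed to be methods matching Indy's TWorkEvent and TWorkEndEvent signatures from the IdComponent unit; the URL and paths are placeholders):

procedure TMainForm.DownloadButtonClick(Sender: TObject);
begin
  // Passing nil for the OnWorkBegin handler is allowed; it simply never fires.
  Download('http://www.example.com/files/big.zip', 'C:\temp\big.zip',
    nil, HttpWork, HttpWorkEnd);
end;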

猥︴琐丶欲为 2024-09-10 14:35:13


Maybe the HTTP RANGE header can help you here. Have a look at archive.org's copy of http://www.west-wind.com/Weblog/posts/244.aspx for more info on resuming HTTP downloads:

(2004-02-07) A couple of days ago somebody on the Message Board asked an interesting question about how to provide resumable HTTP downloads. My first response to this question was that this isn't possible since HTTP is a stateless protocol that has no concept of file pointers and thus can't resume an HTTP download.

However, it turns out HTTP 1.1 does have the ability to specify ranges in downloads by using the Range: header in the HTTP headers sent from the client. You can do things like:

Range: 0-10000
Range: 100000-
Range: -100000

which download the first 10000 bytes, everything from byte 100000 onward, or the last 100000 bytes. There are more combinations, but the first two are the ones of interest for a resumable download.
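
Expressed through the same Request.Range string property the Delphi code on this page uses, those three forms would look like this (a sketch, not part of the quoted article):

// Sketch: the three Range forms from the article, set on a TIdHTTP instance.
Http.Request.Range := '0-10000';    // the first 10001 bytes (0 through 10000)
Http.Request.Range := '100000-';    // everything from byte 100000 to the end
Http.Request.Range := '-100000';    // the last 100000 bytes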

To demonstrate this feature I used wwHTTP (in Web Connection/VFP) to download a first 400k chunk of a file into a file with HTTPGetEx which is meant to simulate an aborted download. Next I do a second request to pick up the existing file and download the remainder:

#INCLUDE wconnect.h
CLEAR
CLOSE DATA
DO WCONNECT

LOCAL o as wwHTTP
lcDownloadedFile = "d:\temp\wwipstuff.zip"

*** Simulate partial output
lcOutput = ""
Text=""
tnSize = 0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",@Text,@tnSize,"Range: bytes=0-400000"+CRLF,lcDownloadedFile)
o.Httpclose()

lcOutput = Text
? LEN(lcOutput)

*** Figure out how much we downloaded
lnOpenAt = FILESIZE(lcDownloadedFile)

*** Do a partial download starting at this byte count
Text=""
tnSize =0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",@Text,@tnSize,"Range: bytes=" + TRANSFORM(lnOpenAt) + "-" + CRLF)
o.Httpclose()

? LEN(Text)
*** Read the existing partial download and append current download
lcOutput = FILETOSTR(lcDownloadedFile) + TEXT
? LEN(lcOutput)

STRTOFILE(lcOutput,lcDownloadedFile)

RETURN

Note that this approach uses a file on disk, so you have to use HTTPGetEx (with Web Connection). The second download can also be done to disk if you choose, but things will get tricky if you have multiple aborts and you need to piece them together. In that case you might want to try to keep track of each file and add a number to it, then combine the result at the very end.

If you download to memory using WinInet (which is what wwHTTP uses behind the scenes) you can also try to peel out the file from the Temporary Internet Files cache. Although this works I suspect this process will become very convoluted quickly so if you plan on providing the ability to resume I would highly recommend that you write your output to file yourself using the approach above.

Some additional information on WinInet and some of the requirements for this approach to work with it are described here: http://www.clevercomponents.com/articles/article015/resuming.asp.

The same can be done with wwHTTP for .Net by adding the Range header to the wwHTTP:WebRequest.Headers object.

(Randy Pearson) Say you don't know what the file size is at the server. Is there a way to find this out, so you can know how many chunks to request, for example? Would you send a HEAD request first, or does the header of the GET response tell you the total size also?

(Rick Strahl) You have to read the Content-Length: header to get the size of the file downloaded. If you're resuming this shouldn't matter - you just use Range: (existingsize)- to get the rest. For chunky downloads you can read the content length and only download the first x bytes. This gets tricky with wwHTTP - you have to make individual calls with HTTPGetEx and set the tnBufferSize parameter to the chunk size to retrieve to have it stop after the size is reached.

(Randy Pearson) Follow-up: It looks like a compliant server would send you enough to know the size. If it provides chunks it should reply with something like:

Content-Range: 0-10000/85432

so you could (if desired) extract that and use it in a loop to continue with intelligent chunk requests.
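
In Delphi terms, pulling the total size out of such a reply could look like the following sketch (the helper name is mine; note that the header on the wire actually has the form 'Content-Range: bytes 0-10000/85432'):

// Hypothetical helper: extracts the total entity size from the
// Content-Range header of a partial (206) response; returns -1 if absent.
function TotalSizeFromContentRange(Http: TIdHTTP): Int64;
var
  Value: string;
  SlashPos: Integer;
begin
  Value := Http.Response.RawHeaders.Values['Content-Range'];
  SlashPos := Pos('/', Value);
  if SlashPos > 0 then
    Result := StrToInt64Def(Copy(Value, SlashPos + 1, MaxInt), -1)
  else
    Result := -1; // header missing or malformed
end;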

Also look here https://forums.embarcadero.com/message.jspa?messageID=219481 for TIdHTTP related discussion on the same topic:

(at least partly as per the "TFileStream.Seek and offset confusion" discussion)

if FileExists(dstFile) then
begin
  Fs := TFileStream.Create(dstFile, fmOpenReadWrite);
  try
    // Rewind up to 1 KB so a possibly corrupt tail from the aborted
    // transfer is overwritten (Max comes from the Math unit).
    Fs.Seek(Max(0, Fs.Size-1024), soFromBeginning);
    // alternatively:
    // Fs.Seek(-1024, soFromEnd);
    Http.Request.Range := IntToStr(Fs.Position) + '-';
    Http.Get(Url, Fs);
  finally
    Fs.Free;
  end;
end;