C#: a stream that inherits class Stream and can be serialized

Published 2024-12-04 00:56:25


I have an ASP.NET MVC application (I also use jQuery).
I allow the user to upload a file using HttpPostedFileBase class.
Then, I use the InputStream property of type Stream to save the file stream to some database I have, where I first serialize my object. Stream is serializable, so no problems here.

The problem begins when the user doesn't upload a file, and in this case I want to use another default file I have somewhere.
In this case, everything has to be similar to the first case, so eventually I'll have a Stream in my database. So I have to instantiate a Stream and store it. Stream is abstract, so I can't instantiate Stream directly. Instead I used FileStream, which inherits Stream. The problem is that FileStream is not serializable, so here I have a problem.

How can I solve it? Is there another stream I can use which inherits Stream and is serializable?


Comments (2)

白馒头 2024-12-11 00:56:25


Don't serialize a stream for storage; a stream is a "hose", not a "bucket". Instead, read the stream and store the binary data (most databases will have a data-type for binary data, such as varbinary(max)). If this is part of an object model, I would be inclined to have a byte[] property (with a meaningful name); that will serialize trivially as part of the model. Just read the stream to create a byte[]; job done. For example:

public static byte[] ReadToEnd(this Stream s) {
    using(var ms = new MemoryStream()) {
        s.CopyTo(ms);
        return ms.ToArray();
    }
}
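Applied to the question's scenario, this means there is no need to instantiate a Stream at all when no file is uploaded; the default file can be read straight into a byte[]. A sketch of both paths (the table and column names "Files", "Name", "Data" and the connection string are assumptions, not part of the original answer):

```csharp
using System.Data.SqlClient;
using System.IO;

public static class FileStorage
{
    // Returns the file contents as a byte[], falling back to a default
    // file on disk when the user uploaded nothing.
    public static byte[] GetFileBytes(Stream uploadedOrNull, string defaultFilePath)
    {
        if (uploadedOrNull == null)
            return File.ReadAllBytes(defaultFilePath); // no Stream needed at all

        using (var ms = new MemoryStream())
        {
            uploadedOrNull.CopyTo(ms); // buffer the upload in memory
            return ms.ToArray();
        }
    }

    // Stores the bytes in a varbinary(max) column (assumed schema).
    public static void SaveToDatabase(string connectionString, string name, byte[] data)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO [Files] ([Name], [Data]) VALUES (@name, @data)", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@data", data);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

Either way the model ends up holding a plain byte[], which serializes without any of the Stream/FileStream problems from the question.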
っ〆星空下的拥抱 2024-12-11 00:56:25


Both Jon and Marc (https://stackoverflow.com/questions/7358923/stream-that-inherits-class-stream-and-can-be-serialized-c/7358976#7358976) have good answers. I prefer Marc's, as it gives you the option of having the SQL layer read sequentially from the stream, instead of buffering all the data in memory (which could lead to an OutOfMemoryException).

However, at the base level you are approaching this incorrectly. SQL engines, in general, really don't like working with large column values - they are incredibly inefficient at it. There is another 'database' that you can use - your filesystem.

So typically you would define your DB structure as follows:

CREATE TABLE [dbo].[Files]
(
   [ID] INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
   [Name] NVARCHAR(255) NOT NULL,
   [Storage] UNIQUEIDENTIFIER NOT NULL
);

In C# land, you would first write the file to disk and then use that identifier to update the database:

/// <summary>
/// Writes a stream to a file and returns a <see cref="Guid"/> that
/// can be used to retrieve it again.
/// </summary>
/// <param name="incomingFile">The incoming file.</param>
/// <returns>The <see cref="Guid"/> that should be used to identify the file.</returns>
public static Guid WriteFile(Stream incomingFile)
{
    var path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), "MyApplication");
    path = Path.Combine(path, "BinaryData");

    var guid = Guid.NewGuid();
    var ba = guid.ToByteArray();

    // Build the nested path for the GUID.
    path = Path.Combine(path, ba[0].ToString("x2"));
    path = Path.Combine(path, ba[1].ToString("x2"));
    path = Path.Combine(path, ba[2].ToString("x2"));
    Directory.CreateDirectory(path); // Always succeeds, even if the directory already exists.

    path = Path.Combine(path, guid.ToString() + ".dat");
    using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        var buffer = new byte[Environment.SystemPageSize];
        var length = 0;
        while ((length = incomingFile.Read(buffer, 0, buffer.Length)) != 0)
            fs.Write(buffer, 0, length); // Write only the bytes actually read.
    }

    return guid;
}

/// <summary>
/// Deletes a file created by <see cref="WriteFile"/>.
/// </summary>
/// <param name="guid">The original <see cref="Guid"/> that was returned by <see cref="WriteFile"/>.</param>
public static void DeleteFile(Guid guid)
{
    var path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), "MyApplication");
    path = Path.Combine(path, "BinaryData");

    var ba = guid.ToByteArray();

    // Build the nested path for the GUID.
    path = Path.Combine(path, ba[0].ToString("x2"));
    path = Path.Combine(path, ba[1].ToString("x2"));
    path = Path.Combine(path, ba[2].ToString("x2"));
    path = Path.Combine(path, guid.ToString() + ".dat");
    if (File.Exists(path))
        File.Delete(path);
}

/// <summary>
/// Reads a file that was created by <see cref="WriteFile"/>.
/// </summary>
/// <param name="guid">The original <see cref="Guid"/> that was returned by <see cref="WriteFile"/>.</param>
/// <returns>The stream that can be used to read the file.</returns>
public static Stream ReadFile(Guid guid)
{
    var path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), "MyApplication");
    path = Path.Combine(path, "BinaryData");

    var ba = guid.ToByteArray();

    // Build the nested path for the GUID.
    path = Path.Combine(path, ba[0].ToString("x2"));
    path = Path.Combine(path, ba[1].ToString("x2"));
    path = Path.Combine(path, ba[2].ToString("x2"));
    path = Path.Combine(path, guid.ToString() + ".dat");
    return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
}

You should also investigate Transactional NTFS to ensure your DB and filesystem stay in sync. The inefficiency of storing BLOBs in MS SQL is one of the reasons Microsoft implemented TxF - so you should take their advice: don't store BLOBs/files in SQL.

Side note: the nested folders (ba[0] through ba[2]) are important for both performance and file-system limitations - a single folder cannot efficiently hold a very large number of files.
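For completeness, here is how the write path and the table row could be tied together (a sketch only; it assumes the WriteFile method above is placed in a static class named FileStore, and the connection string handling is illustrative):

```csharp
using System;
using System.Data.SqlClient;
using System.IO;

public static class FileRepository
{
    // Writes the stream to disk via WriteFile, then records the
    // Name -> Storage GUID mapping in the [dbo].[Files] table from
    // the schema above. Returns the new row's identity.
    public static int SaveFile(string connectionString, string name, Stream contents)
    {
        // 1. Write the bytes to the filesystem first; get the storage GUID.
        Guid storageId = FileStore.WriteFile(contents);

        // 2. Record the mapping in the database.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO [dbo].[Files] ([Name], [Storage]) " +
            "OUTPUT INSERTED.[ID] VALUES (@name, @storage)", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@storage", storageId);
            conn.Open();
            return (int)cmd.ExecuteScalar(); // identity of the inserted row
        }
    }
}
```

Doing the disk write before the database insert means a crash between the two steps leaves at worst an orphaned file, never a database row pointing at nothing; that is also the gap TxF is meant to close.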
