Best strategy for configuring BerkeleyDB write and read processes

Published on 2025-02-03 19:59:01


I have a requirement where two independent processes run on an embedded Linux device.

One of the processes takes CAN bus messages every 10 - 25ms and writes them to a BerkeleyDB.
(NOTE: the process is configured such that it can skip every n-th message (resolution control), so the DB writes don't necessarily have to execute at the speed of the messages.)

The other process reads the CAN messages from the DB and does further processing - if "further processing" is successful, the processed message is deleted from the DB.

Currently, my open_db function is as follows:

int open_db(DB **dbpp,   /* The DB handle that we are opening */
    const char *file_name,     /* The file in which the db lives */
    const char *program_name,  /* Name of the program calling this function */
    FILE *error_file_pointer)  /* File where we want error messages sent */
{
    DB *dbp;    /* For convenience */
    u_int32_t open_flags;
    int ret;
    /* Initialize the DB handle */
    ret = db_create(&dbp, NULL, 0);
    if (ret != 0) {
        fprintf(error_file_pointer, "%s: %s\n", program_name,
                db_strerror(ret));
        return(ret);
    }

    /* Point to the memory malloc'd by db_create() */
    *dbpp = dbp;

    /* Set up error handling for this database */
    dbp->set_errfile(dbp, error_file_pointer);
    dbp->set_errpfx(dbp, program_name);

    /* Set the open flags */
    open_flags = DB_CREATE;

    /* open the database */
    ret = dbp->open(dbp,        /* Pointer to the database */
                    NULL,       /* Txn pointer (no transactions) */
                    file_name,  /* File name */
                    NULL,       /* Logical db name (unused) */
                    DB_BTREE,   /* Database type (using btree) */
                    open_flags, /* Open flags */
                    0);         /* File mode (using defaults) */

    if (ret != 0) {
        dbp->err(dbp, ret, "Database '%s' open failed.", file_name);
        return(ret);
    }

    return (0); 
}
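From what I can tell from the docs, safe concurrent access from two independent processes requires opening the database inside a shared transactional environment rather than standalone as above. A minimal sketch of what that might look like (untested; directory and function names are placeholders, not my current code):

```c
#include <db.h>

/* Sketch: open a shared environment so two processes can safely use
 * the same database.  env_dir is a hypothetical directory that both
 * processes would point at. */
int open_env_and_db(DB_ENV **envpp, DB **dbpp,
                    const char *env_dir, const char *file_name)
{
    DB_ENV *envp;
    DB *dbp;
    int ret;
    u_int32_t env_flags = DB_CREATE      /* Create environment if needed */
                        | DB_INIT_TXN    /* Transactions */
                        | DB_INIT_LOCK   /* Locking for multi-process access */
                        | DB_INIT_LOG    /* Write-ahead logging */
                        | DB_INIT_MPOOL  /* Shared memory buffer pool */
                        | DB_RECOVER     /* Run normal recovery on open */
                        | DB_REGISTER;   /* Only recover if a process died */

    if ((ret = db_env_create(&envp, 0)) != 0)
        return ret;
    if ((ret = envp->open(envp, env_dir, env_flags, 0)) != 0) {
        envp->close(envp, 0);
        return ret;
    }

    if ((ret = db_create(&dbp, envp, 0)) != 0)
        return ret;
    /* DB_AUTO_COMMIT wraps the open itself in a transaction */
    if ((ret = dbp->open(dbp, NULL, file_name, NULL, DB_BTREE,
                         DB_CREATE | DB_AUTO_COMMIT, 0)) != 0) {
        dbp->close(dbp, 0);
        return ret;
    }

    *envpp = envp;
    *dbpp = dbp;
    return 0;
}
```

With DB_REGISTER, each process can pass DB_RECOVER unconditionally and recovery only actually runs if a previous process exited uncleanly.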

My reader process (written in Python) is as follows:

...
    try:
        
        cursor = self._db.cursor()
        record = cursor.first()

        
        if record is None:
            raise EmptyCanMessages('No CAN messages in DB.')


        while record:
            (id, data) = record
            
            decoded_id = str(id, 'utf-8')
            decoded_data = str(data, 'utf-8')

            try:
                # processing here

                del self._db[id]

                try:
                    self._db.sync()
                    self.log(f"Removed {decoded_id} record and synced DB", log_level=LogLevel.DEBUG)

                except Exception as e:
                    self.log(e, log_level=LogLevel.ERROR)
                
            except Exception as e:
                self.log(e, log_level=LogLevel.ERROR)

            record = cursor.next()
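For comparison, a transactional version of this loop is what I understand the bsddb3 bindings would allow, so that the delete is committed atomically instead of relying on sync(). This is a sketch under the assumption that both processes share an environment (the directory name is hypothetical), not my current code:

```python
# Sketch using the bsddb3 bindings: wrap the read + delete pass in a
# transaction instead of calling sync() after every delete.
from bsddb3 import db

env = db.DBEnv()
env.open('/var/log/can_env',   # hypothetical shared environment directory
         db.DB_CREATE | db.DB_INIT_TXN | db.DB_INIT_LOCK |
         db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_REGISTER | db.DB_RECOVER)

can_db = db.DB(env)
can_db.open('can.db', dbtype=db.DB_BTREE, flags=db.DB_AUTO_COMMIT)

txn = env.txn_begin()
cursor = can_db.cursor(txn)
record = cursor.first()
while record:
    key, data = record
    # ... processing here ...
    cursor.delete()      # delete the record the cursor points at
    record = cursor.next()
cursor.close()
txn.commit()             # one durable commit instead of per-record sync()
```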

Currently, I am experiencing a few intermittent DB issues:

can_database: BDB0689 /var/log/can.db page 12 is on free list with type 5
can_database: BDB0061 PANIC: Invalid argument
Unable to insert record bf4307a9-b0c2-42f8-b230-65aed5db75ed, err: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
can_database: BDB0060 PANIC: fatal region error detected; run recovery
Unable to sync db, err: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
...
...

Some additional details, questions and guidance on optimal configuration:

After every write, I am doing a DB->sync():

CAN_RECORD record;
memset(&record, 0, sizeof(CAN_RECORD));

asprintf(&record.can_data, "{\"t\": \"%s\", \"b\": \"%s\"}", U_UID, thing_name);
insert_record(&can_db.can_db, &record);

sync_db(&can_db.can_db);

free(record.can_data);
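If I moved to a transactional environment, my understanding is that the per-write sync could be replaced by a commit, something like the following sketch (function and variable names are placeholders; assumes the environment was opened with DB_INIT_TXN):

```c
#include <string.h>
#include <db.h>

/* Sketch: transactional insert instead of put + DB->sync().
 * envp/dbp are assumed to come from a transactional environment. */
int insert_record_txn(DB_ENV *envp, DB *dbp,
                      const char *uid, const char *json)
{
    DB_TXN *txn = NULL;
    DBT key, data;
    int ret;

    memset(&key, 0, sizeof(DBT));
    memset(&data, 0, sizeof(DBT));
    key.data = (void *)uid;
    key.size = (u_int32_t)strlen(uid) + 1;
    data.data = (void *)json;
    data.size = (u_int32_t)strlen(json) + 1;

    if ((ret = envp->txn_begin(envp, NULL, &txn, 0)) != 0)
        return ret;
    if ((ret = dbp->put(dbp, txn, &key, &data, 0)) != 0) {
        txn->abort(txn);
        return ret;
    }
    /* Commit flushes the log record; no explicit DB->sync() needed */
    return txn->commit(txn, 0);
}
```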

After every read (in the Python app), once I do del self._db[id] I call self._db.sync().

  • What is the best database type? (currently, using the standard DB_BTREE) - would DB_QUEUE or DB_RECNO be better here?

I want to minimize in-memory usage and just write everything to disk (since I am working on a memory-constrained device).
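On the DB_QUEUE question: as far as I can tell it fits a producer/consumer pattern because it supports fixed-length records and the DB_CONSUME flag, which reads and deletes the head record in one atomic operation. A sketch of what the consumer side might look like (untested; the record length and function name are hypothetical, and the database would have to be opened as DB_QUEUE after calling set_re_len()):

```c
#include <string.h>
#include <db.h>

#define CAN_REC_LEN 128   /* hypothetical fixed record length */

/* Sketch: atomically read + remove the head of a DB_QUEUE database. */
int consume_one(DB *dbp, char *out, u_int32_t out_len)
{
    DBT key, data;
    db_recno_t recno;
    int ret;

    memset(&key, 0, sizeof(DBT));
    memset(&data, 0, sizeof(DBT));
    key.data = &recno;                 /* record number comes back here */
    key.ulen = sizeof(recno);
    key.flags = DB_DBT_USERMEM;
    data.data = out;
    data.ulen = out_len;
    data.flags = DB_DBT_USERMEM;

    /* DB_CONSUME: get + delete the head of the queue in one call */
    ret = dbp->get(dbp, NULL, &key, &data, DB_CONSUME);
    return ret;   /* DB_NOTFOUND when the queue is empty */
}
```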

For db->open(), there is a range of flags, addressed in more detail here: https://docs.oracle.com/cd/E17076_05/html/api_reference/C/frame_main.html

  • Should I be using transactions?

  • Should I be syncing DB every write and every cursor read + delete? What is the better way to do this?

  • Is there a way to circumvent db->sync() altogether and automatically commit to disk?
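Related to the sync question: I gather that in a transactional environment the commit-time durability/fsync trade-off can be tuned with DB_ENV->set_flags(), along these lines (fragment; envp is assumed to be an open environment handle):

```c
/* Sketch: trade some durability for fewer fsyncs on commit. */
envp->set_flags(envp, DB_TXN_WRITE_NOSYNC, 1); /* write log, skip fsync */
/* or, weaker durability: */
envp->set_flags(envp, DB_TXN_NOSYNC, 1);       /* don't flush log on commit */
```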

My main requirement is that two independent processes be able to simultaneously a) write to the DB, and b) read from it and delete records as they are processed.

The read process should be able to resume processing the accumulated messages whenever it comes back online.

(NOTE: I will keep the growth management of the DB out of this discussion, and purely focus on the optimal way of configuring for write and read with modify from two independent processes on a single DB.)
