不再让梦枯萎


不再让梦枯萎 2025-02-20 22:04:24


Solution

After digging deep inside the PageView class, I just tried every constructor property ( T_T ) and found the solution.

padEnds: false

Set it in the constructor of PageView:

PageView.builder(
   itemCount: 2,
   padEnds: false,
   pageSnapping: true,
   itemBuilder: (context, pagePosition) {
      return Container();
   },
)

If anyone has another solution, please let me know. :)

How do I remove the leading space of a PageView in Flutter?

不再让梦枯萎 2025-02-20 19:14:46


One way of mitigating this problem that I found to be successful: load the data up front using a Firestore getDocs call. Once that call has finished, make your onSnapshot call as you normally would. By the time the snapshot listener finishes setting up, there likely won't be much of a change in data from your initial fetch. Below is how you might implement this solution.

export default function Home() {
  const { logout, currentUser } = useAuth();
  const [products, setProducts] = useState([]);
  const [lastVisible, setLastVisible] = useState({});
  const [loading, setLoading] = useState(true);
  const [loadingMore, setLoadingMore] = useState(false);
  const [unsubListenerFunctions, setUnsubListenerFunctions] = useState([]);
  const [showGoToTop, setShowGoToTop] = useState(false);

  const [initialProductsLoaded, setInitialProductsLoaded] = useState(false);

  useEffect(() => {
    window.addEventListener('scroll', handleScroll, { passive: true });

    return () => {
      window.removeEventListener('scroll', handleScroll);
    };
  }, []);



  useEffect(() => {
      getDocs(query(
        collection(db, 'products'),
        orderBy('productName', 'asc'),
        limit(6)
      )).then((snapshot) => {
        setProducts(snapshot.docs);
        setLastVisible(snapshot.docs[snapshot.docs.length - 1]);
        loading && setLoading(false);
        setInitialProductsLoaded(true);
      });
  }, []);

  useEffect(() => {
    if (initialProductsLoaded) {
    const unsubscribe = onSnapshot(
      query(
        collection(db, 'products'),
        orderBy('productName', 'asc'),
        limit(6)
      ),
      snapshot => {
        setProducts(snapshot.docs);
        setLastVisible(snapshot.docs[snapshot.docs.length - 1]);
      }
    );

    setUnsubListenerFunctions([unsubscribe]);

    return () => {
      // unsubscribe directly; the unsubListenerFunctions state captured
      // by this closure would be stale here
      unsubscribe();
    };
    }

  }, [initialProductsLoaded]);
 
  useEffect(() => {
    if (loadingMore && lastVisible) {
      const unsubscribe = onSnapshot(
        query(
          collection(db, 'products'),
          orderBy('productName', 'asc'),
          startAfter(lastVisible),
          limit(2)
        ),
        snapshot => {
          setProducts(prev => prev.concat(snapshot.docs));
          setLastVisible(snapshot.docs[snapshot.docs.length - 1]);
          setLoadingMore(false);
        }
      );

      setUnsubListenerFunctions(prev => [...prev, unsubscribe]);
    } else setLoadingMore(false);
  }, [loadingMore]);

  const handleScroll = e => {
    if (e.target.scrollingElement.scrollTop > 200) {
      setShowGoToTop(true);
    } else {
      setShowGoToTop(false);
    }

    if (loadingMore) return;

    const bottomReached =
      e.target.scrollingElement.scrollHeight -
        e.target.scrollingElement.scrollTop <=
      e.target.scrollingElement.clientHeight + 100;

    if (bottomReached) {
      setLoadingMore(true);
    }
  };

  return (
    <div className="" onScroll={handleScroll}>

        ...

    </div>
  );
}

Before this solution, I had very little luck with setting up multiple snapshot listeners at once. Commenting out all but one listener gave me the result of a very quick load, but as soon as I had more than one listener, it would result in a 7-10 second load time. Hope this is helpful!

What is the correct way to use multiple snapshot listeners in Firestore for lazy loading and infinite scrolling (Firebase + React)?

不再让梦枯萎 2025-02-20 18:53:52


There really isn't a way to use a CASE statement to do what you are trying to accomplish without writing out each condition.

If your end goal is a mapping that changes the value of a column to something else, and that mapping could change or contains a large number of items, then for easy maintenance you would want to use another table, which in turn requires a join or the use of a sub-query.

Here are 2 working examples with test data to show how those work:

Left Join:

-- CTEs with test data
with convert_data as (
    select *
    From (
            values('data1', 'd1'),
                ('data2', 'd2')
        ) as convertdata (ddatavalue, datareplace)
),
some_table as(
    select *
    From (
            values('1', '1', 'data1'),
                ('2', '2', 'data2'),
                ('3', '3', 'data3')
        ) as sametable (col1, col2, ddata)
)
-- query showing how to do the join
select col1,
    col2,
    coalesce(datareplace, ddata) as converted_data
from some_table st
    left join convert_data cd on cd.ddatavalue = st.ddata

Sub-query:

-- CTEs with test data
with convert_data as (
    select *
    From (
            values('data1', 'd1'),
                ('data2', 'd2')
        ) as convertdata (ddatavalue, datareplace)
),
some_table as(
    select *
    From (
            values('1', '1', 'data1'),
                ('2', '2', 'data2'),
                ('3', '3', 'data3')
        ) as sametable (col1, col2, ddata)
)
-- query showing how to do the sub-query
select col1,
    col2,
    coalesce(
        (
            select cd.datareplace
            from convert_data cd
            where cd.ddatavalue = st.ddata
        ),
        ddata
    ) as converted_data
from some_table st

Which one performs better will really depend on your data, structure, and data file format. Test both and see which works better for you.

Athena: converting a column to different values

不再让梦枯萎 2025-02-20 04:31:26


As you only need the numbers (and not any other data), shorten the query so that it searches only the student_courses table:

SQL> with temp as
  2    (select student_id,
  3            count(course_id) cnt
  4     from student_courses
  5     group by student_id
  6    )
  7  select
  8    sum(case when cnt <  4 then 1 else 0 end) part_time,
  9    sum(case when cnt >= 4 then 1 else 0 end) full_time
 10  from temp;

 PART_TIME  FULL_TIME
---------- ----------
         6          2

SQL>

Full-time and part-time student status

不再让梦枯萎 2025-02-20 03:21:13


You can do it with an extra service, like in this config:

version: "2"
services:
  minio:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./test/.minio/data:/export
      - ./test/.minio/config:/root/.minio
    environment:
      - "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE"
      - "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    command: server /export

  createbuckets:
    image: minio/mc
    depends_on:
      - minio
    volumes:
      - ./my-data:/tmp/data
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host add myminio http://minio:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY;
      /usr/bin/mc rm -r --force myminio/somebucketname;
      /usr/bin/mc mb myminio/somebucketname;
      /usr/bin/mc policy download myminio/somebucketname;
      /usr/bin/mc cp /tmp/data myminio/somebucketname;
      exit 0;
      "

Inspired by this GitHub issue: https://github.com/minio/minio/issues/4769#issuecomment-320319655

Minio Docker container does not show files from a local folder

不再让梦枯萎 2025-02-19 16:01:39


It would be more efficient to use EXISTS instead of IN, as the EXISTS keyword evaluates to TRUE or FALSE, while the IN keyword compares all values in the corresponding subquery column. If you use the IN operator, the SQL engine will scan all records fetched from the inner query. On the other hand, if we use EXISTS, the SQL engine will stop the scanning process as soon as it finds a match.

Query with EXISTS looks like

SELECT [date],col1, col2
FROM @demo d
WHERE EXISTS ( 
    SELECT 1 FROM @demo t
    WHERE t.col1 = d.col2
)
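For comparison, the IN form that the query above replaces would look like this (my addition, not from the original answer):

```sql
-- Equivalent filter using IN: the subquery's whole result column is
-- produced and compared, which is exactly what EXISTS avoids.
SELECT [date], col1, col2
FROM @demo d
WHERE d.col2 IN (
    SELECT t.col1
    FROM @demo t
)
```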

How to filter rows if the value in col2 does not exist in col1

不再让梦枯萎 2025-02-19 09:29:13


If you want to use pure React, you might want to use something like Create React App, or Vite + React. You can pull in a Markdown library like react-markdown to parse blog posts.

I would also recommend using something like Astro (https://astro.build/), which can parse markdown and React natively.

I want to create a blog in pure React JS and Node.js (Express, MongoDB) where I can also post code snippets and headings, not just plain text

不再让梦枯萎 2025-02-19 00:44:08


You can try str.extract, then map:

import re
c = '('+'|'.join(df1.Name.tolist())+')'

df2['new'] = df2.Description.str.extract(c,flags=re.IGNORECASE)[0].str.upper().\
                  map(dict(zip(df1.Name.str.upper(),df1.Category)))

0        Fruit
1    Vegetable
2        Fruit
Name: 0, dtype: object

How to compare strings in a DataFrame column with substrings in another DataFrame and extract values

不再让梦枯萎 2025-02-18 12:34:21


Maybe your MIBs are outdated.

snmptranslate -IR -Td -OS PAN-COMMON-MIB::panCommonObjs.7.4.4.1.6.6
PAN-COMMON-MIB::panDeviceLoggingExtFwdStatsTable1minAvgSendRate.6
panDeviceLoggingExtFwdStatsTable1minAvgSendRate OBJECT-TYPE
  -- FROM   PAN-COMMON-MIB
  SYNTAX    Unsigned32
  MAX-ACCESS    read-only
  STATUS    current
  DESCRIPTION   "Counter for average send rate over 1 minute interval."
::= { iso(1) org(3) dod(6) internet(1) private(4) enterprises(1) panRoot(25461) panMibs(2) panCommonMib(1) panCommonObjs(2) panDeviceLogging(7) panDeviceLoggingExtFwd(4) panDeviceLoggingExtFwdStatsTable(4) panDeviceLoggingExtFwdStatsEntry(1) panDeviceLoggingExtFwdStatsTable1minAvgSendRate(6) 6 }

I got them from GitHub.

SNMP translate to common objs?

不再让梦枯萎 2025-02-18 07:35:07


I would use regular expressions here:

SELECT *
FROM yourTable
WHERE ibanaccount ~* '[^A-Z0-9]' AND  -- matches a non-alphanumeric character
      LENGTH(ibanaccount) <> 18 AND   -- length is not 18
      ibanaccount ~ '[^A-Z]';         -- at least one non-uppercase letter

PostgreSQL: not entirely alphanumeric, not entirely uppercase, length different from 18

不再让梦枯萎 2025-02-17 20:09:29


The following uses XML LINQ and puts the results into a DataTable:

using System;
using System.Linq;
using System.Text;
using System.Collections;
using System.Collections.Generic;
using System.Xml;
using System.Xml.Linq;
using System.Data;

namespace ConsoleApp2
{
    class Program
    {
        const string FILENAME = @"c:\temp\test.xml";
        static void Main(string[] args)
        {
            XDocument doc = XDocument.Load(FILENAME);
            XElement ratedCurrent = doc.Descendants("RATED_CURRENT").FirstOrDefault();
            string[] children = ratedCurrent.Elements().Select(x => x.Name.LocalName).ToArray();


            DataTable dt = new DataTable();
            dt.Columns.Add("RATED_CURRENT", typeof(string));
            foreach(string child in children)
            {
                dt.Columns.Add(child, typeof(string));
                dt.Columns.Add(child + "_COUNT", typeof(string));
            }

            foreach(XElement rCurrent in doc.Descendants("RATED_CURRENT"))
            {
                DataRow row = dt.Rows.Add();
                row["RATED_CURRENT"] = int.Parse(rCurrent.FirstNode.ToString());
                foreach(XElement child in rCurrent.Elements())
                {
                    string columnName = child.Name.LocalName;
                    int value = int.Parse(child.FirstNode.ToString());
                    int count = (int)child.Element("SAMPLE_COUNT");
                    row[columnName] = value;
                    row[columnName + "_COUNT"] = count;
                }
            }



        }
 
    }


}

Reading XML using System.Xml

不再让梦枯萎 2025-02-17 13:25:14


There is no need to use heap allocation for late binding to work:

#include <iostream>

struct Base {
  virtual void print() { std::cout << "Base\n"; }
};

struct Derived : public Base {
  void print() override { std::cout << "Derived\n"; }
};

struct Composed {
  Base b;
  Derived d;
};

int main() {
  Base b;
  Derived d;
  Composed c;

  Base &bb{b}, &bd{d}, &cbb{c.b}, &cbd{c.d};

  bb.print();   // Base
  bd.print();   // Derived
  cbb.print();  // Base
  cbd.print();  // Derived
}

In the example above, no heap allocation takes place and all reference variables are of type Base&. Late binding works just fine.

Can late binding in C++ work without heap memory when using composition?

不再让梦枯萎 2025-02-17 10:44:12


I improved the code just for fun. As for the answer to your question: it simply runs from number1 to number2, and for each of those tries to divide it by every number from 2 up to its square root. If no divisor is found (and the number is greater than 1), it is a prime number.

const number1 = parseInt(prompt("Enter the lower number"), 10);
const number2 = parseInt(prompt("Enter the higher number"), 10);

console.log(`The prime numbers between ${number1} and ${number2} are: `);

for (let i = number1; i <= number2; i++) {
  let flag = 0;
  for (let j = 2; j <= Math.sqrt(i); j++) {
    if (i % j == 0) {
      flag = 1;
      break;
    }
  }
  if (i > 1 && flag == 0) console.log(i);
}
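The inner loop can also be pulled out into a reusable helper, which makes the trial-division-up-to-the-square-root step explicit (a sketch of mine, not part of the original answer):

```javascript
// Returns true if n is a prime number, using trial division
// up to the square root of n.
function isPrime(n) {
  if (n <= 1) return false;
  for (let j = 2; j <= Math.sqrt(n); j++) {
    if (n % j === 0) return false;
  }
  return true;
}

// Lists the primes in an inclusive range.
function primesBetween(lo, hi) {
  const result = [];
  for (let i = lo; i <= hi; i++) {
    if (isPrime(i)) result.push(i);
  }
  return result;
}

console.log(primesBetween(10, 30)); // [ 11, 13, 17, 19, 23, 29 ]
```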

JavaScript nested loops - want to understand the computation step by step

不再让梦枯萎 2025-02-16 22:26:50


After searching for hours I found my year-old example files.

Caveat:

  • solution only covers 2D matrices
  • not suited for 3 dimensional or generic ndarrays

Write a numpy array to an ASCII file with a header specifying nrows and ncols:

def write_matrix2D_to_ascii(filename, matrix2D):
    
    nrows, ncols = matrix2D.shape
    
    with open(filename, "w") as file:
        
        # write header [rows x cols]
        file.write(f"{nrows} {ncols}")
        file.write("\n")
        
        # write values 
        for row in range(nrows):
            for col in range(ncols):
                value = matrix2D[row, col]

                file.write(str(value))
                file.write(" ")
            file.write("\n")

Example output data-file.txt looks like this (first row is header specifying nrows and ncols):

2 3
1.0 2.0 3.0
4.0 5.0 6.0
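For symmetry, the same file can also be read back into a numpy array on the Python side; a minimal sketch (the function name is mine, not from the original answer) that skips the header row and uses it to restore the shape:

```python
import numpy as np

def read_matrix2D_from_ascii(filename):
    # First line is the header: "<nrows> <ncols>"
    with open(filename) as file:
        nrows, ncols = map(int, file.readline().split())

    # Remaining lines hold the matrix values.
    data = np.loadtxt(filename, skiprows=1)

    # Reshape guards against loadtxt collapsing a single row to 1-D.
    return data.reshape(nrows, ncols)
```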

C++ function to read the matrix from the ASCII file into an OpenCV matrix:

#include <iostream>
#include <fstream>
#include <iomanip> // set precision of output string

#include <opencv2/core/core.hpp> // OpenCV matrices for storing data

using namespace std;
using namespace cv;

void readMatAsciiWithHeader( const string& filename, Mat& matData)
{
    cout << "Create matrix from file :" << filename << endl;

    ifstream inFileStream(filename.c_str());
    if(!inFileStream){
        cout << "File cannot be found" << endl;
        exit(-1);
    }

    int rows, cols;
    inFileStream >> rows;
    inFileStream >> cols;
    matData.create(rows,cols,CV_32F);
    cout << "numRows: " << rows << "\t numCols: " << cols << endl;

    matData.setTo(0);  // init all values to 0
    float *dptr;
    for(int ridx=0; ridx < matData.rows; ++ridx){
        dptr = matData.ptr<float>(ridx);
        for(int cidx=0; cidx < matData.cols; ++cidx, ++dptr){
            inFileStream >> *dptr;
        }
    }
    inFileStream.close();

}

Driver code to use the above function in a C++ program:

Mat myMatrix;
readMatAsciiWithHeader("path/to/data-file.txt", myMatrix);

For completeness, some code to save the data using C++:

int saveMatAsciiWithHeader( const string& filename, Mat& matData)
{
    if (matData.empty()){
       cout << "File could not be saved. MatData is empty" << endl;
       return 0;
    }
    ofstream oStream(filename.c_str());

    // Create header
    oStream << matData.rows << " " << matData.cols << endl;
    
    // Write data
    for(int ridx=0; ridx < matData.rows; ridx++)
    {
        for(int cidx=0; cidx < matData.cols; cidx++)
        {
            oStream << setprecision(9) << matData.at<float>(ridx,cidx) << " ";
        }
        oStream << endl;
    }
    oStream.close();
    cout << "Saved " << filename.c_str() << endl;

    return 1;
}

Future work:

  • solution for 3D matrices
  • conversion to Eigen::Matrix

Saving data (a matrix or ndarray) and loading it in C++ (as an OpenCV Mat)

不再让梦枯萎 2025-02-16 19:34:10


I just received the got vulnerability warning from GitHub Dependabot, and resolved it as follows:

  1. It looks like it's a nested dependency of nodemon. https://github.com/remy/nodemon/issues/2023 They are going to fix it by removing their dependency on got.
  2. It's for nodemon, which runs during dev, not in production, so you could (and I did) ignore it, as it's not vulnerable code :D.
  3. Other option: maybe set up an override for got in your package.json? Or wait for nodemon's next update?

Note: Sometimes npm audit fix does nothing; I always assumed that's because it can't figure out how to fix the issue, e.g. in nodemon it's a nested dependency, so it might struggle. Also, npm audit fix sometimes fixes something by upgrading, but that breaks something else, so I don't have 100% faith in it. (No bugs or articles to back this up, just anecdotal evidence.)
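For option 3, npm (v8.3+) supports an overrides field in package.json that pins a nested dependency; a sketch (the version shown is a placeholder, so check which got release actually patches the advisory in your tree):

```json
{
  "overrides": {
    "got": "^11.8.5"
  }
}
```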

I can't fix the got vulnerability in Node even with npm audit fix --force
