Spring Batch job throws an exception when using mapFieldSet to read a TXT file with only one column

Posted on 2025-01-22 12:24:38

I have a TXT file with just one column: ids, and the ids are separated by new lines.
I want to read this file with a reader, but I think that I shouldn't use a DelimitedLineTokenizer because my file doesn't have multiple columns. Here is the code:

<bean id="idsReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
        <property name="lineMapper">
            <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
                <property name="lineTokenizer">
                    <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                        <property name="names" value="ids" />
                    </bean>
                </property>
                <property name="fieldSetMapper">
                    <bean class="IdsMapper" />
                </property>
            </bean>
        </property>
        <property name="resource" value="#{stepExecutionContext['fileResource']}" />
        <property name="encoding" value="UTF-8" />
    </bean>

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

public class IdsMapper implements FieldSetMapper<String> {

    @Override
    public String mapFieldSet(FieldSet fs) throws BindException {
        if (fs == null) {
            return null;
        }
        // Return the single "ids" column of the current line as the item
        return fs.readString("ids");
    }
}

and here is the exception I receive:

[jobTaskExecutor-15] ERROR Encountered an error executing step loadIds
Message: IdsMapper.mapFieldSet(Lorg/springframework/batch/item/file/transform/FieldSet;)Ljava/lang/String;
    Line | Method
->>   -1 | mapFieldSet                    in batch.model.IdsMapper
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
|      7 | mapFieldSet                    in batch.model.IdsMapper$ET3XDpZ2
|     -1 | mapFieldSet . . . . . . . . .  in batch.model.IdsMapper$DT3XDpZ2
|     43 | mapLine                        in org.springframework.batch.item.file.mapping.DefaultLineMapper
|    180 | doRead . . . . . . . . . . . . in org.springframework.batch.item.file.FlatFileItemReader
|     88 | read                           in org.springframework.batch.item.support.AbstractItemCountingItemStreamItemReader

Answer by 已下线请稍等 (2025-01-29 12:24:38):

You could use your own implementation of LineMapper:

    <bean id="idsReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
        <property name="lineMapper">
            <bean class="com.example.TextFileLineMapper"/>
        </property>        
        <property name="resource" value="file:#{stepExecutionContext['resource']}"/>
        <property name="encoding" value="UTF-8" />
    </bean>
import org.springframework.batch.item.file.LineMapper;

public class TextFileLineMapper implements LineMapper<TextFileDto> {

    @Override
    public TextFileDto mapLine(String line, int lineNumber) {
        // Each line of the file holds exactly one id; wrap it in a DTO
        TextFileDto textFileDto = new TextFileDto();
        textFileDto.setId(line);
        return textFileDto;
    }
}
public class TextFileDto {
    private String id;
    public String getId() {
        return id;
    }
    public void setId(String id) {
        this.id = id;
    }
}
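
As a related note: if the items can simply be plain Strings, Spring Batch also ships a built-in org.springframework.batch.item.file.mapping.PassThroughLineMapper that returns each raw line unchanged, so no custom mapper class is needed at all. Below is a minimal Java-config sketch of the same reader using it. It assumes Spring Batch 4+ (for FlatFileItemReaderBuilder), that the step execution context key is 'fileResource' as in the question, and that its value is a plain file path; the config class name is illustrative.

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.batch.item.file.mapping.PassThroughLineMapper;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class IdsReaderConfig { // illustrative class name

    // Step-scoped so the file path can be resolved from the step execution context,
    // mirroring the XML configuration above (key name 'fileResource' assumed from the question).
    @Bean
    @StepScope
    public FlatFileItemReader<String> idsReader(
            @Value("#{stepExecutionContext['fileResource']}") String fileResource) {
        return new FlatFileItemReaderBuilder<String>()
                .name("idsReader")
                .resource(new FileSystemResource(fileResource))
                .encoding("UTF-8")
                // PassThroughLineMapper hands each line back unchanged as a String
                .lineMapper(new PassThroughLineMapper())
                .build();
    }
}

The only functional difference from the answer above is the item type: the reader then emits String items instead of TextFileDto objects, so downstream processors and writers would need to accept Strings.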