Golang: parsing an XML site
When I try to parse this XML site, I get no data: the output is just "[]". So I downloaded the site's XML file, and after removing <?xml version="1.0" encoding="windows-1251"?> everything parses fine. Is it possible to read the data without deleting that fragment?
package main

import (
	"encoding/xml"
	"fmt"
	"io/ioutil"
	"net/http"
)

type ValCurs struct {
	XMLName xml.Name `xml:"ValCurs"`
	Date    string   `xml:"Date,attr"` // no space before "attr", or the option is ignored
	Name    string   `xml:"name,attr"`
	Valute  []Valute `xml:"Valute"`
}

type Valute struct {
	XMLName  xml.Name `xml:"Valute"`
	CharCode string   `xml:"CharCode"`
	Nominal  string   `xml:"Nominal"`
	Name     string   `xml:"Name"`
	Value    string   `xml:"Value"`
}

func main() {
	resp, err := http.Get("http://www.cbr.ru/scripts/XML_daily.asp")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	var val ValCurs
	if err := xml.Unmarshal(body, &val); err != nil {
		// fails here: the document declares encoding "windows-1251",
		// which xml.Unmarshal cannot decode on its own
		fmt.Println(err)
	}
	fmt.Println(val)
}
1 comment
Okay, I may have figured it out, though I'm not sure I did everything correctly.
I installed the charmap package "github.com/aglyzov/charmap"
and cut off the declaration bytes.
After that everything works the way I wanted.
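Cutting the bytes works, but the standard-library route is to give the XML decoder a charset.Reader: xml.Decoder has a CharsetReader hook that is called with the encoding named in the <?xml ...?> declaration. Below is a self-contained sketch of that approach. The decodeWin1251 helper is my own minimal stand-in that only covers ASCII, Ё/ё, and the contiguous Cyrillic block 0xC0..0xFF; in real code you would plug in golang.org/x/text/encoding/charmap (Windows1251.NewDecoder()) or charset.NewReaderLabel from golang.org/x/net/html/charset instead. The sample document is a small hand-made ValCurs fragment, not live CBR data.

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

type ValCurs struct {
	XMLName xml.Name `xml:"ValCurs"`
	Date    string   `xml:"Date,attr"`
	Name    string   `xml:"name,attr"`
	Valute  []Valute `xml:"Valute"`
}

type Valute struct {
	XMLName  xml.Name `xml:"Valute"`
	CharCode string   `xml:"CharCode"`
	Nominal  string   `xml:"Nominal"`
	Name     string   `xml:"Name"`
	Value    string   `xml:"Value"`
}

// decodeWin1251 converts windows-1251 bytes to UTF-8. Demo-only: it handles
// ASCII, Ё/ё, and the Cyrillic block 0xC0..0xFF, which maps contiguously onto
// U+0410..U+044F. Use golang.org/x/text/encoding/charmap in real code.
func decodeWin1251(b []byte) string {
	var sb strings.Builder
	for _, c := range b {
		switch {
		case c < 0x80:
			sb.WriteByte(c)
		case c == 0xA8:
			sb.WriteRune('Ё')
		case c == 0xB8:
			sb.WriteRune('ё')
		case c >= 0xC0:
			sb.WriteRune(rune(0x0410 + int(c) - 0xC0))
		default:
			sb.WriteRune('?') // punctuation etc. not needed for this demo
		}
	}
	return sb.String()
}

func main() {
	// A tiny ValCurs document in windows-1251: "Доллар" is C4 EE EB EB E0 F0.
	raw := []byte("<?xml version=\"1.0\" encoding=\"windows-1251\"?>" +
		"<ValCurs Date=\"02.03.2002\" name=\"Foreign Currency Market\">" +
		"<Valute><CharCode>USD</CharCode><Nominal>1</Nominal>" +
		"<Name>\xC4\xEE\xEB\xEB\xE0\xF0</Name>" +
		"<Value>30,9436</Value></Valute></ValCurs>")

	dec := xml.NewDecoder(bytes.NewReader(raw))
	// Called once with the charset from the <?xml ...?> declaration; it must
	// return a reader that yields the same stream re-encoded as UTF-8.
	dec.CharsetReader = func(cs string, input io.Reader) (io.Reader, error) {
		if strings.ToLower(cs) != "windows-1251" {
			return nil, fmt.Errorf("unsupported charset %q", cs)
		}
		b, err := io.ReadAll(input)
		if err != nil {
			return nil, err
		}
		return strings.NewReader(decodeWin1251(b)), nil
	}

	var val ValCurs
	if err := dec.Decode(&val); err != nil {
		panic(err)
	}
	fmt.Println(val.Valute[0].CharCode, val.Valute[0].Name, val.Valute[0].Value)
}
```

With this, the declaration stays in place and nothing has to be sliced off the byte stream; the same CharsetReader also works when decoding straight from resp.Body.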