csv package - encoding/csv - Go Packages

Package csv reads and writes comma-separated values (CSV) files. There are many kinds of CSV files; this package supports the format described in RFC 4180, except that Writer uses LF instead of CRLF as the newline character by default.

A csv file contains zero or more records of one or more fields per record. Each record is separated by the newline character. The final record may optionally be followed by a newline character.

field1,field2,field3

White space is considered part of a field.

Carriage returns before newline characters are silently removed.

Blank lines are ignored. A line with only whitespace characters (excluding the ending newline character) is not considered a blank line.

Fields which start and stop with the quote character " are called quoted-fields. The beginning and ending quote are not part of the field.

The source:

normal string,"quoted-field"

results in the fields

{`normal string`, `quoted-field`}

Within a quoted-field a quote character followed by a second quote character is considered a single quote.

"the ""word"" is true","a ""quoted-field"""

results in

{`the "word" is true`, `a "quoted-field"`}

Newlines and commas may be included in a quoted-field.

"Multi-line
field","comma is ,"

results in

{`Multi-line
field`, `comma is ,`}
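
To see these rules in action, the following sketch (not part of the original page) parses the inputs above with a csv.Reader and prints the resulting fields:

package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	// The input combines the examples above: doubled quotes inside a
	// quoted-field collapse to a single quote, and newlines and commas
	// are allowed inside a quoted-field.
	in := "normal string,\"quoted-field\"\n" +
		"\"the \"\"word\"\" is true\",\"a \"\"quoted-field\"\"\"\n" +
		"\"Multi-line\nfield\",\"comma is ,\"\n"

	r := csv.NewReader(strings.NewReader(in))
	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}
	for _, record := range records {
		fmt.Printf("%q\n", record)
	}
}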


var (
	ErrBareQuote  = errors.New("bare \" in non-quoted-field")
	ErrQuote      = errors.New("extraneous or missing \" in quoted-field")
	ErrFieldCount = errors.New("wrong number of fields")

	// Deprecated: ErrTrailingComma is no longer used.
	ErrTrailingComma = errors.New("extra delimiter at end of line")
)

These are the errors that can be returned in [ParseError.Err].
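
Read wraps these sentinel errors in a ParseError, so they can be tested with errors.Is. A minimal sketch (the input is made up for illustration):

package main

import (
	"encoding/csv"
	"errors"
	"fmt"
	"log"
	"strings"
)

func main() {
	// The second record has three fields instead of two, so Read returns
	// it together with an error wrapping csv.ErrFieldCount.
	in := "a,b\nc,d,e\n"
	r := csv.NewReader(strings.NewReader(in))

	if _, err := r.Read(); err != nil { // first record parses cleanly
		log.Fatal(err)
	}
	record, err := r.Read() // second record: field-count mismatch
	if errors.Is(err, csv.ErrFieldCount) {
		fmt.Println("wrong number of fields, record still returned:", record)
	}
}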


type ParseError struct {
	StartLine int   // Line where the record starts
	Line      int   // Line where the error occurred
	Column    int   // Column (1-based byte index) where the error occurred
	Err       error // The actual error
}

A ParseError is returned for parsing errors. Line and column numbers are 1-indexed.
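
When a record fails to parse, the position information can be recovered with errors.As; another small sketch, again with made-up input:

package main

import (
	"encoding/csv"
	"errors"
	"fmt"
	"strings"
)

func main() {
	// The bare quote in the second field cannot be parsed, so Read fails
	// with a *csv.ParseError carrying 1-indexed line and column numbers.
	in := "a,b\"c\n"
	r := csv.NewReader(strings.NewReader(in))

	_, err := r.Read()
	var parseErr *csv.ParseError
	if errors.As(err, &parseErr) {
		fmt.Printf("line %d, column %d: %v\n", parseErr.Line, parseErr.Column, parseErr.Err)
	}
}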

type Reader struct {
	// Comma is the field delimiter.
	// It is set to comma (',') by NewReader.
	// Comma must be a valid rune and must not be \r, \n,
	// or the Unicode replacement character (0xFFFD).
	Comma rune

	// Comment, if not 0, is the comment character. Lines beginning with the
	// Comment character without preceding whitespace are ignored.
	// With leading whitespace the Comment character becomes part of the
	// field, even if TrimLeadingSpace is true.
	// Comment must be a valid rune and must not be \r, \n,
	// or the Unicode replacement character (0xFFFD).
	// It must also not be equal to Comma.
	Comment rune

	// FieldsPerRecord is the number of expected fields per record.
	// If FieldsPerRecord is positive, Read requires each record to
	// have the given number of fields. If FieldsPerRecord is 0, Read sets it
	// to the number of fields in the first record, so that future records
	// must have the same field count. If FieldsPerRecord is negative, no
	// check is made and records may have a variable number of fields.
	FieldsPerRecord int

	// If LazyQuotes is true, a quote may appear in an unquoted field and a
	// non-doubled quote may appear in a quoted field.
	LazyQuotes bool

	// If TrimLeadingSpace is true, leading white space in a field is ignored.
	// This is done even if the field delimiter, Comma, is white space.
	TrimLeadingSpace bool

	// ReuseRecord controls whether calls to Read may return a slice sharing
	// the backing array of the previous call's returned slice for performance.
	ReuseRecord bool

	// Deprecated: TrailingComma is no longer used.
	TrailingComma bool
	// contains filtered or unexported fields
}

A Reader reads records from a CSV-encoded file.

As returned by NewReader, a Reader expects input conforming to RFC 4180. The exported fields can be changed to customize the details before the first call to Reader.Read or Reader.ReadAll.

The Reader converts all \r\n sequences in its input to plain \n, including in multiline field values, so that the returned data does not depend on which line-ending convention an input file uses.

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	in := `first_name,last_name,username
"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`
	r := csv.NewReader(strings.NewReader(in))

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println(record)
	}
}
Output:

[first_name last_name username]
[Rob Pike rob]
[Ken Thompson ken]
[Robert Griesemer gri]

This example shows how csv.Reader can be configured to handle other types of CSV files.

package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	in := `first_name;last_name;username
"Rob";"Pike";rob
# lines beginning with a # character are ignored
Ken;Thompson;ken
"Robert";"Griesemer";"gri"
`
	r := csv.NewReader(strings.NewReader(in))
	r.Comma = ';'
	r.Comment = '#'

	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Print(records)
}
Output:

[[first_name last_name username] [Rob Pike rob] [Ken Thompson ken] [Robert Griesemer gri]]
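
The remaining exported fields can be combined in the same way. The sketch below is not one of the package's own examples; it assumes input with leading spaces and shows TrimLeadingSpace, FieldsPerRecord, LazyQuotes, and ReuseRecord together:

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	in := ` first_name,last_name,username
 "Rob",Pike,rob
 Ken,Thompson,ken
`
	r := csv.NewReader(strings.NewReader(in))
	r.TrimLeadingSpace = true // drop the leading space before each field
	r.FieldsPerRecord = 3     // require exactly three fields per record
	r.LazyQuotes = true       // tolerate stray quotes inside fields
	r.ReuseRecord = true      // Read may reuse the returned slice

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Because ReuseRecord is true, copy the record before keeping it.
		kept := make([]string, len(record))
		copy(kept, record)
		fmt.Println(kept)
	}
}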

NewReader returns a new Reader that reads from r.

FieldPos returns the line and column corresponding to the start of the field with the given index in the slice most recently returned by Reader.Read. Numbering of lines and columns starts at 1; columns are counted in bytes, not runes.

If this is called with an out-of-bounds index, it panics.
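
A small sketch (with made-up input) showing how FieldPos can be used while iterating with Read:

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	in := "first_name,last_name\n\"Rob\",Pike\n"
	r := csv.NewReader(strings.NewReader(in))

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Report where each field of the most recent record begins.
		for i := range record {
			line, column := r.FieldPos(i)
			fmt.Printf("%q starts at line %d, column %d\n", record[i], line, column)
		}
	}
}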

InputOffset returns the input stream byte offset of the current reader position. The offset gives the location of the end of the most recently read row and the beginning of the next row.
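
A similar sketch for InputOffset, again with made-up input:

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	in := "a,b\nc,d\n"
	r := csv.NewReader(strings.NewReader(in))

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// InputOffset reports the byte offset just past the record read.
		fmt.Println(record, "ends at byte offset", r.InputOffset())
	}
}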

Read reads one record (a slice of fields) from r. If the record has an unexpected number of fields, Read returns the record along with the error ErrFieldCount. If the record contains a field that cannot be parsed, Read returns a partial record along with the parse error. The partial record contains all fields read before the error. If there is no data left to be read, Read returns nil, io.EOF. If [Reader.ReuseRecord] is true, the returned slice may be shared between multiple calls to Read.

ReadAll reads all the remaining records from r. Each record is a slice of fields. A successful call returns err == nil, not err == io.EOF. Because ReadAll is defined to read until EOF, it does not treat end of file as an error to be reported.

package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	in := `first_name,last_name,username
"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`
	r := csv.NewReader(strings.NewReader(in))

	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Print(records)
}
Output:

[[first_name last_name username] [Rob Pike rob] [Ken Thompson ken] [Robert Griesemer gri]]

type Writer struct {
	Comma   rune // Field delimiter (set to ',' by NewWriter)
	UseCRLF bool // True to use \r\n as the line terminator
	// contains filtered or unexported fields
}

A Writer writes records using CSV encoding.

As returned by NewWriter, a Writer writes records terminated by a newline and uses ',' as the field delimiter. The exported fields can be changed to customize the details before the first call to Writer.Write or Writer.WriteAll.

[Writer.Comma] is the field delimiter.

If [Writer.UseCRLF] is true, the Writer ends each output line with \r\n instead of \n.

The writes of individual records are buffered. After all data has been written, the client should call the Writer.Flush method to guarantee all data has been forwarded to the underlying io.Writer. Any errors that occurred should be checked by calling the Writer.Error method.

package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	records := [][]string{
		{"first_name", "last_name", "username"},
		{"Rob", "Pike", "rob"},
		{"Ken", "Thompson", "ken"},
		{"Robert", "Griesemer", "gri"},
	}

	w := csv.NewWriter(os.Stdout)

	for _, record := range records {
		if err := w.Write(record); err != nil {
			log.Fatalln("error writing record to csv:", err)
		}
	}

	// Write any buffered data to the underlying writer (standard output).
	w.Flush()

	if err := w.Error(); err != nil {
		log.Fatal(err)
	}
}
Output:

first_name,last_name,username
Rob,Pike,rob
Ken,Thompson,ken
Robert,Griesemer,gri
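
Like the Reader, the Writer's exported fields can be changed before the first Write. A minimal sketch (not one of the original examples) that emits semicolon-separated output with CRLF line endings:

package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	records := [][]string{
		{"first_name", "last_name", "username"},
		{"Rob", "Pike", "rob"},
	}

	w := csv.NewWriter(os.Stdout)
	w.Comma = ';'    // use semicolons as the field delimiter
	w.UseCRLF = true // terminate lines with \r\n instead of \n

	w.WriteAll(records) // calls Flush internally

	if err := w.Error(); err != nil {
		log.Fatal(err)
	}
}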

NewWriter returns a new Writer that writes to w.

Flush writes any buffered data to the underlying io.Writer. To check if an error occurred during Flush, call Writer.Error.

Write writes a single CSV record to w along with any necessary quoting. A record is a slice of strings with each string being one field. Writes are buffered, so Writer.Flush must eventually be called to ensure that the record is written to the underlying io.Writer.
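
The quoting is automatic: fields containing the delimiter, a quote character, or a newline are wrapped in quotes, and embedded quotes are doubled. A short sketch with an illustrative record:

package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	w := csv.NewWriter(os.Stdout)

	// Fields containing the delimiter, a quote, or a newline are quoted
	// automatically; quotes inside a field are doubled on output.
	record := []string{"plain", "has,comma", "has \"quotes\"", "multi\nline"}
	if err := w.Write(record); err != nil {
		log.Fatalln("error writing record to csv:", err)
	}

	w.Flush()
	if err := w.Error(); err != nil {
		log.Fatal(err)
	}
}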

WriteAll writes multiple CSV records to w using Writer.Write and then calls Writer.Flush, returning any error from the Flush.

package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	records := [][]string{
		{"first_name", "last_name", "username"},
		{"Rob", "Pike", "rob"},
		{"Ken", "Thompson", "ken"},
		{"Robert", "Griesemer", "gri"},
	}

	w := csv.NewWriter(os.Stdout)
	w.WriteAll(records) // calls Flush internally

	if err := w.Error(); err != nil {
		log.Fatalln("error writing csv:", err)
	}
}
Output:

first_name,last_name,username
Rob,Pike,rob
Ken,Thompson,ken
Robert,Griesemer,gri
