linux - Splitting a file by columns
I know the cut command can cut column(s) from a file, but can I use it to split a file into multiple files, with each output file named after the first line of its column? There should be the same number of produced files as there are columns in the original file.
Example (edit):
The columns are separated by tabs and can be of different lengths. The first file will contain the names of the rows.
probe   file1.txt   file2.txt   file3.txt
"1007_s_at" 7.84390328616472    7.60792223630275    7.77487266222512
...
One more thing: the original file is extremely huge, so I want a solution that splits it in one run, not one that calls cut repeatedly.
This can be done with one line of awk:
$ cat test.tsv
field1  field2  field3  field4
asdf    asdf    asdf    asdf
lkjlkj  lkjlkj  lkjlkj  lkjlkj
feh     feh     feh     bmeh
$ awk -F'\t' 'NR==1 { for(i=1;i<=NF;i++) { names[i] = $i }; next } { for(i=1;i<=NF;i++) print $i >> names[i] }' test.tsv
$ ls
field1  field2  field3  field4  test.tsv
$ cat field4
asdf
lkjlkj
bmeh
Edited to include the tab separator, courtesy of glenn jackman.
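One caveat worth noting (my addition, not from the original answer): the `>>` redirection keeps every output file open for the whole run, so an input with very many columns can hit awk's per-process open-file limit. A hedged workaround is to `close()` each file after every write, trading speed for a bounded number of open descriptors. A minimal sketch, using hypothetical two-column data:

```shell
# Set up a small hypothetical input (not from the question).
cd "$(mktemp -d)"
printf 'field1\tfield2\nA\tB\nC\tD\n' > test.tsv

# Same single-pass split as above, but close() each output file after every
# write so only one file is open at a time -- slower, but safe for inputs
# with thousands of columns.
awk -F'\t' '
NR==1 { for (i = 1; i <= NF; i++) names[i] = $i; next }
      { for (i = 1; i <= NF; i++) { print $i >> names[i]; close(names[i]) } }
' test.tsv

cat field2    # second column: B, then D
```

Because `>>` appends and `close()` flushes, reopening the same file on the next row continues where it left off, so the output is identical to the original one-liner.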
Addition
For removing double quotes from the fields:
awk -F'\t' 'NR==1 { for(i=1;i<=NF;i++) { names[i] = $i }; next } { for(i=1;i<=NF;i++) {gsub(/"/,"",$i); print $i >> names[i] }}' example.tsv
Additional addition
For removing double quotes only at the start or end of a field:
awk -F'\t' 'NR==1 { for(i=1;i<=NF;i++) { names[i] = $i }; next } { for(i=1;i<=NF;i++) {gsub(/^"|"$/,"",$i); print $i >> names[i] }}' example.tsv
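A quick demo of the anchored quote-stripping variant, on a hypothetical example.tsv shaped like the question's data (quoted probe IDs, unquoted numbers):

```shell
# Hypothetical input matching the question's format.
cd "$(mktemp -d)"
printf 'probe\tfile1.txt\n"1007_s_at"\t7.84\n' > example.tsv

# gsub(/^"|"$/,"",$i) strips a double quote only when it is the first or
# last character of the field, leaving any interior quotes untouched.
awk -F'\t' 'NR==1 { for(i=1;i<=NF;i++) { names[i] = $i }; next }
            { for(i=1;i<=NF;i++) { gsub(/^"|"$/,"",$i); print $i >> names[i] } }' example.tsv

cat probe        # 1007_s_at  (surrounding quotes removed)
cat file1.txt    # 7.84
```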