How to insert several csv files into Elasticsearch?
I have several CSV files on university courses that all seem linked by an ID (you can find them here), and I wondered how to put them into Elasticsearch. Thanks to this video and Logstash, I know how to insert a single CSV file into Elasticsearch, but do you know how to insert several, such as those at the provided link?



For now I have started with a .config file for a first CSV file, ACCREDITATION.csv, but writing one for every file would be painful...



The .config file is:



input {
  file {
    path => "/Users/mike/Data/ACCREDITATION.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    columns => ['PUBUKPRN', 'UKPRN', 'KISCOURSEID', 'KISMODE', 'ACCTYPE', 'ACCDEPEND', 'ACCDEPENDURL', 'ACCDEPENDURLW']
  }
  mutate { convert => ["PUBUKPRN", "integer"] }
  mutate { convert => ["UKPRN", "integer"] }
  mutate { convert => ["KISMODE", "integer"] }
  mutate { convert => ["ACCTYPE", "integer"] }
  mutate { convert => ["ACCDEPEND", "integer"] }
}

output {
  elasticsearch {
    hosts => "localhost"
    index => "accreditation"
    document_type => "accreditation keys"
  }
  stdout {}
}
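
For reference, this is how I run a single-file pipeline like the one above (assuming Logstash's bin directory is on the PATH; accreditation.config is just the name I gave the file):

bin/logstash -f accreditation.config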


Update, May 3rd



Not knowing how to use a .config file to load CSV files into Elasticsearch, I fell back on the Elastic blog and tried a shell script, importCSVFiles, for a first .csv file before trying to generalize the approach:



importCSVFiles:



#!/bin/bash
while read f1
do
    curl -XPOST 'https://XXX.us-east-1.aws.found.io:9243/courses/accreditation' \
         -H "Content-Type: application/json" \
         -u elastic:XXX \
         -d "{ \"accreditation\": \"$f1\" }"
done < AccreditationByHep.csv


Yet I received a mapper_parsing_exception in the terminal:



mike@mike-thinks:~/Data/on_2018_04_25_16_43_17$ ./importCSVFiles
{"error":{"root_cause":
[{"type":"mapper_parsing_exception","reason":"failed to parse"}],
"type":"mapper_parsing_exception",
"reason":"failed to parse",
"caused_by":{"type":"i_o_exception","reason":"Illegal unquoted character ((CTRL-CHAR, code 13)):
has to be escaped using backslash to be included in string valuen at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@e18584; line: 1, column: 88]"}
},"status":400
}
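
Looking at the error, CTRL-CHAR code 13 is a carriage return, so I suspect my CSV has Windows (CRLF) line endings that end up inside the JSON string. A sketch of how I could strip them before importing (the .unix.csv output name is just mine):

tr -d '\r' < AccreditationByHep.csv > AccreditationByHep.unix.csv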

Tags: elasticsearch

asked May 2 '18 at 14:47 by ThePassenger, edited May 3 '18 at 13:46

  • Could you add the .config definition to your question?
    – hot2use, May 3 '18 at 6:29

1 Answer

I just had a look at the data in the Higher Education Statistics Agency (HESA) zip file, and each file has a different structure.



This means you will either have to create an individual .config file for each import, or create a single .config file using conditionals, as described in the following article:



Reference: How to use multiple csv files in logstash (Elastic Discuss Forum)



Expanding on your first .config by one level:



input {
  file {
    path => "/Users/mike/Data/ACCREDITATION.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
  file {
    path => "/Users/mike/Data/AccreditationByHep.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  # added condition for the first file
  if [path] == "/Users/mike/Data/ACCREDITATION.csv" {
    csv {
      separator => ","
      columns => ['PUBUKPRN', 'UKPRN', 'KISCOURSEID', 'KISMODE', 'ACCTYPE', 'ACCDEPEND', 'ACCDEPENDURL', 'ACCDEPENDURLW']
    }
    mutate { convert => ["PUBUKPRN", "integer"] }
    mutate { convert => ["UKPRN", "integer"] }
    mutate { convert => ["KISMODE", "integer"] }
    mutate { convert => ["ACCTYPE", "integer"] }
    mutate { convert => ["ACCDEPEND", "integer"] }
  }
  # added condition for the second file
  else if [path] == "/Users/mike/Data/AccreditationByHep.csv" {
    csv {
      separator => ","
      columns => ['AccreditingBodyName', 'AccreditionType', 'HEP', 'KisCourseTitle', 'KiscourseID']
    }
    # omitted the mutations for the second file
  }
}

output {
  # added condition for the first file
  if [path] == "/Users/mike/Data/ACCREDITATION.csv" {
    elasticsearch {
      hosts => "localhost"
      index => "accreditation"
      document_type => "accreditation keys"
    }
  }
  # added condition for the second file
  else if [path] == "/Users/mike/Data/AccreditationByHep.csv" {
    elasticsearch {
      hosts => "localhost"
      index => "accreditationbyhep"
      document_type => "accreditationbyhep keys"
    }
  }
  stdout {}
}


Note that document_type is a deprecated configuration option.
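
A minimal sketch of the same output block without the deprecated option (recent Elasticsearch versions use a single mapping type, so it can simply be dropped):

output {
  elasticsearch {
    hosts => "localhost"
    index => "accreditation"
    # document_type omitted; Elasticsearch applies its default type
  }
}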



You should be able to expand on this example on your own.
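
If the list of files keeps growing, note that the file input's path option also accepts glob patterns, so a single input block can watch the whole directory while the per-file conditionals stay in the filter (a sketch, assuming all the CSVs live under /Users/mike/Data):

input {
  file {
    path => "/Users/mike/Data/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}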

answered May 3 '18 at 8:10 by hot2use

  • Thanks a lot! I'm now watching Getting Started with Logstash to learn how to concretely send my data with this .config file.
    – ThePassenger, May 3 '18 at 8:41

  • I've just watched the video I provided above to see how they import files, but they use Filebeat and run a filebeat.yml file to store logs, which is an approach too different from mine. How do you use the .config file to store the data?
    – ThePassenger, May 3 '18 at 13:36