I am creating a dataset for some neural-network learning purposes. I originally concatenated and generated the sentences with a for loop, but that was too slow, so I switched to foreach for the sentence generation. It is now fast and finishes in about 50 seconds. I simply fill slots in fixed templates and paste them together into sentences, yet the output comes out garbled (misspelled words, stray spaces inside words, whole words missing, and so on).
library(foreach)
library(doParallel)
library(tictoc)

tic("Data preparation - parallel mode")
cl <- makeCluster(3)
registerDoParallel(cl)

f_sentences <- c(); sentences <- c()
hr = 38:180; fl = 1:5; month = 1:5
strt <- Sys.time()

a <- foreach(hr = 38:180, .packages = c('foreach', 'doParallel')) %dopar% {
  foreach(fl = 1:5, .packages = c('foreach', 'doParallel')) %dopar% {
    foreach(month = 1:5, .packages = c('foreach', 'doParallel')) %dopar% {
      if (hr >= 35 & hr <= 44) {
        sentences <- paste("About", toString(hr), "soldiers died in the battle (count being severly_low).",
                           "Around", toString(fl), "soldiers and civilians went missing. We only have about",
                           (sample(38:180, 1)), "crates which lasts for", toString(month), "months as food supply")
        f_sentences <- c(f_sentences, sentences); outfile <- unname(f_sentences)
      }
      if (hr >= 45 & hr <= 59) {
        sentences <- paste("About", toString(hr), "soldiers died in the battle (count being low).",
                           "Around", toString(fl), "soldiers and civilians went missing. We only have about",
                           (sample(38:180, 1)), "crates which lasts for", toString(month), "months as food supply")
        f_sentences <- c(f_sentences, sentences); outfile <- unname(f_sentences)
      }
      if (hr >= 60 & hr <= 100) {
        sentences <- paste("About", toString(hr), "soldiers died in the battle (count being medium).",
                           "Around", toString(fl), "soldiers and civilians went missing. We only have about",
                           (sample(38:180, 1)), "crates which lasts for", toString(month), "months as food supply")
        f_sentences <- c(f_sentences, sentences); outfile <- unname(f_sentences)
      }
      if (hr >= 101 & hr <= 150) {
        sentences <- paste("About", toString(hr), "soldiers died in the battle (count being high).",
                           "Around", toString(fl), "soldiers and civilians went missing. We only have about",
                           (sample(38:180, 1)), "crates which lasts for", toString(month), "months as food supply")
        f_sentences <- c(f_sentences, sentences); outfile <- unname(f_sentences)
      }
      if (hr >= 151 & hr <= 180) {
        sentences <- paste("About", toString(hr), "soldiers died in the battle (count being severly_high).",
                           "Around", toString(fl), "soldiers and civilians went missing. We only have about",
                           (sample(38:180, 1)), "crates which lasts for", toString(month), "months as food supply")
        f_sentences <- c(f_sentences, sentences); outfile <- unname(f_sentences)
      }
      return(outfile)
    }
    write.table(outfile, file = "/home/outfile.txt", append = T, row.names = F, col.names = F)
    gc()
  }
}
stopCluster(cl)
toc()
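For comparison, here is a minimal sketch (not the code above; it assumes the same templates and the same output path) in which the workers only return their sentences and the master process writes the file exactly once after the cluster is stopped. In the code above, every worker appends to /home/outfile.txt concurrently, which is one place where interleaved writes could plausibly scramble lines; the sketch avoids any file access from inside %dopar%.

library(foreach)
library(doParallel)

cl <- makeCluster(3)
registerDoParallel(cl)

# Nested loops via %:% so only the innermost body runs under %dopar%;
# each iteration returns one sentence and .combine = c flattens the results.
all_sentences <- foreach(hr = 38:180, .combine = c) %:%
  foreach(fl = 1:5, .combine = c) %:%
  foreach(month = 1:5, .combine = c) %dopar% {
    level <- if (hr <= 44) "severly_low" else if (hr <= 59) "low" else
      if (hr <= 100) "medium" else if (hr <= 150) "high" else "severly_high"
    paste("About", hr, "soldiers died in the battle (count being", paste0(level, ")."),
          "Around", fl, "soldiers and civilians went missing. We only have about",
          sample(38:180, 1), "crates which lasts for", month, "months as food supply")
  }

stopCluster(cl)

# Single write from the master process; no worker ever touches the file.
# (Unlike write.table, writeLines does not wrap each line in quotes.)
writeLines(all_sentences, "/home/outfile.txt")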
The statistics of the generated file are:
- Lines: 427,975
- Tokenization: split on single spaces (" ")
- Vocabulary size: 567
library(data.table)   # fread()
library(magrittr)     # the %>% pipe

path <- "/home/outfile.txt"
splitting <- " "      # split on single spaces, as described above
File <- (fread(path, sep = "\n", header = F))[[1]]
corpus <- tolower(File) %>%
  # removePunctuation() %>%
  strsplit(splitting) %>%
  unlist()
vocab <- unique(corpus)
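As a rough sanity check (an illustrative sketch, not part of the original post), the observed vocabulary can be compared against the words the fixed templates are able to produce, so that artefacts such as wentt or crpply stand out immediately. The cleaning step and the template word list below are assumptions derived from the templates shown earlier.

# Strip the quotes/parentheses/periods that write.table and the templates add,
# then flag any token the templates could not have produced.
template_words <- c(
  "about", "soldiers", "died", "in", "the", "battle", "count", "being",
  "severly_low", "low", "medium", "high", "severly_high", "around", "and",
  "civilians", "went", "missing", "we", "only", "have", "crates", "which",
  "lasts", "for", "months", "as", "food", "supply"
)
expected <- c(template_words, as.character(1:180))    # numbers cover every slot range
clean_vocab <- unique(gsub("[^a-z0-9_]", "", vocab))  # drop punctuation and quoting
unexpected <- setdiff(clean_vocab, c(expected, ""))
unexpected   # mangled tokens such as "wentt" or "crpply" would show up here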
Simple sentences like these should produce a very small vocabulary, since the numbers are the only varying parameters. Inspecting the vocabulary output and grepping the file, I found many mangled words in the sentences (and some words missing altogether), for example wentt and crpply, which should never appear given my fixed templates.
Expected sentence:

"About 40 soldiers died in the battle (count being severly_low). Around 1 soldiers and civilians went missing. We only have about 146 crates which lasts for 1 months as food supply"

grep -rnw 'outfile.txt' -e 'wentt'
24105:"About 62 soldiers died in the battle (count being medium). Around 2 soldiers and civilians wentt 117 crates which lasts for 1 months as food supply"

grep -rnw 'outfile.txt' -e 'crpply'
76450:"About 73 soldiers died in the battle (count being medium). Around 1 soldiers and civilians went missing. We only have about 133 crpply"

The first few sentences come out correctly; the corruption appears later in the file. What could be causing this? I am only doing ordinary paste() calls and slot filling. Any help is appreciated!
Answer:

The code now runs without any further errors. I suspect the earlier corruption was a one-off glitch. I have since tested it on other machines with different versions of R and the problem did not reappear.