I often need to batch-read web resources in a loop, and sometimes a URL throws an error, which kills the whole loop. Very annoying. Here is a tryCatch pattern I learned for handling this.
```r
for (i in 1:nrow(kegg.id)) {
  temp <- NULL                         # reset so a failed request can't reuse data from the previous iteration
  tryCatch({
    Sys.sleep(0.5)                     # be polite to the KEGG server
    KEGGREST::keggGet(kegg.id$id[i]) -> temp
  }, error = function(e) {
    print(e)                           # report the error but keep the loop running
  }, finally = {
    if (!is.null(temp) && !is.null(temp[[1]][["PATHWAY"]])) {
      temp[[1]][["PATHWAY"]] %>%
        as.data.frame() %>%
        magrittr::set_names(c("kegg.pathway")) %>%
        tibble::rownames_to_column(var = "kegg.map.id") %>%
        dplyr::mutate(kegg.id = i) %>%
        rbind(all.des) -> all.des      # all.des must exist before the loop starts
      print(paste0("Done: ", i, "/", nrow(kegg.id)))
    }
  })
}
```
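The snippet assumes two objects created elsewhere: kegg.id, a data frame with an id column of KEGG identifiers, and all.des, the accumulator that each iteration rbind()s onto. A minimal, hypothetical setup sketch (the example IDs are placeholders, not from the original post):

```r
library(magrittr)   # provides the %>% pipe used in the loop

# Hypothetical input table of KEGG IDs to query (placeholder values)
kegg.id <- data.frame(id = c("C00022", "C00031"))

# Empty accumulator with the same columns the loop produces
all.des <- data.frame(kegg.map.id  = character(),
                      kegg.pathway = character(),
                      kegg.id      = integer())
```

With something like this in place, the loop can run as-is: a failed request only prints the error and moves on instead of stopping the whole batch.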