Neo4j: USING PERIODIC COMMIT with apoc.load.jdbc

I have an Oracle table with more than 200 million rows that I want to load into Neo4j with apoc.load.jdbc. How can I finish this work without exhausting memory? Essentially, I want to do something like:

USING PERIODIC COMMIT  
CALL apoc.load.jdbc('alias','table_name') YIELD row  
MATCH (r:Result {result_id: row.RESULT_ID})  
MATCH (g:Gene   {gene_id:   row.ENTITY_ID})  
CREATE (r)-[:EXP {expression_level: row.EXPRESSION_LEVEL}]->(g)

However, it seems that USING PERIODIC COMMIT can only be combined with LOAD CSV; the query above fails with an error when I try it with apoc.load.jdbc.

I've looked at apoc.periodic.iterate and apoc.periodic.commit, but the former appears to read the entire result set into memory before iterating, while the latter just re-runs the same query over and over, which doesn't work here.
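For context, apoc.periodic.commit repeatedly executes one statement until it reports zero processed rows, which suits idempotent batched work rather than a one-pass stream from JDBC. A minimal sketch of that pattern (the :Obsolete label here is a hypothetical example, not from my data model):

```cypher
// apoc.periodic.commit re-runs this statement until it returns 0,
// so it only fits work that shrinks toward zero, e.g. batched deletes:
CALL apoc.periodic.commit(
  "MATCH (n:Obsolete) WITH n LIMIT $limit
   DETACH DELETE n
   RETURN count(*)",
  {limit: 10000})
```

A JDBC import never "shrinks": re-running the same SELECT would process the same leading rows again, which is why this procedure doesn't map onto my problem.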

The Oracle table is partitioned, so I can use a WHERE filter to load one partition at a time. However, some partitions still contain more data than fits in memory.
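For completeness, the per-partition workaround looks like the following: apoc.load.jdbc also accepts a SQL statement instead of a table name, plus a parameter list. The column name PART_KEY and the value '2019Q1' are placeholders for my actual partitioning scheme, and this still hits the memory limits described above for the largest partitions:

```cypher
// Load a single partition by filtering on the (hypothetical)
// partitioning column; repeat once per partition value.
CALL apoc.load.jdbc('alias',
  'SELECT * FROM table_name WHERE PART_KEY = ?', ['2019Q1']) YIELD row
MATCH (r:Result {result_id: row.RESULT_ID})
MATCH (g:Gene   {gene_id:   row.ENTITY_ID})
CREATE (r)-[:EXP {expression_level: row.EXPRESSION_LEVEL}]->(g)
```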

I can't be the first person with this problem, can I?

Thank you in advance

Solution

It may be a little late, but for others who hit this: I ran into the same problem. A single large query killed my machine, and apoc.periodic.iterate helped greatly. Experiment with batchSize to find what suits your setup. The retries parameter re-runs any failed batch (a batch can fail when it depends on another part of the query that has not completed yet).

// Note the double quotes around each statement, so the single quotes
// inside apoc.load.jdbc don't terminate the string.
CALL apoc.periodic.iterate(
  "CALL apoc.load.jdbc('alias','table_name') YIELD row",
  "MATCH (r:Result {result_id: row.RESULT_ID})
   MATCH (g:Gene   {gene_id:   row.ENTITY_ID})
   CREATE (r)-[:EXP {expression_level: row.EXPRESSION_LEVEL}]->(g)",
  {batchSize: 10000, iterateList: true, parallel: true, retries: 20})