If you upgrade to (or already use) 11gR2 Clusterware, you can use ACFS.
It works quite well for a similar problem on our system.
From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org] on behalf of Matthew Zito [mzito_at_gridapp.com]
Sent: Wednesday, September 29, 2010 18:31
To: niall.litchfield_at_gmail.com
Cc: ORACLE-L
Subject: Re: File Processing Question
You could add a VIP to the RAC cluster, attach it to one of the nodes, and have the FTP client connect to that VIP. Then, when a node fails, the VIP will fail over to the other node, and the next time the FTP client connects, it'll reach the surviving node.
Or are you concerned about the log files on the down node that haven't yet been processed?
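To illustrate the VIP approach from the client side, here is a minimal sketch of a connect-with-retry loop, assuming the VIP is reachable as a hostname; the class name, host, port, and retry parameters are all illustrative, not from the thread:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class VipReconnect {
    // Connect to the VIP with retries: after a node failure the VIP
    // relocates to a surviving node, so a failed connection attempt is
    // simply retried until it succeeds or attempts are exhausted.
    public static Socket connectWithRetry(String vipHost, int port,
                                          int maxAttempts, long waitMillis)
            throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Socket s = new Socket();
            try {
                // 2-second connect timeout per attempt
                s.connect(new InetSocketAddress(vipHost, port), 2000);
                return s; // connected: the VIP is up on some node
            } catch (IOException e) {
                last = e;
                s.close();
                Thread.sleep(waitMillis); // VIP may still be relocating
            }
        }
        throw last; // all attempts failed
    }
}
```

The same idea applies whether the client is a real FTP client or any other TCP client: the retry loop, not the client, absorbs the window during which the VIP is moving between nodes.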
On Sep 29, 2010, at 12:24 PM, "Niall Litchfield" wrote:
After the wisdom of crowds here.
Consider a system that processes files uploaded by FTP to the DB server. Currently the upload directory is polled periodically for new files (since they don't all arrive on a predictable schedule with predictable names). Any new files are processed and then moved to an archive location so that they aren't reprocessed. The polling and processing are done by Java stored procedures. This is a RAC system with no shared filesystem storage, and the polling jobs run on a particular instance via the 10g Job Class trick.

The question I have is how you would implement resilience to node failure for this system. It seems to me that we could:

- add shared storage - at a cost, probably
- FTP the files directly to the DB - which probably implies code changes
Does anyone else do anything similar and if so how?
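For reference, the poll-then-archive mechanism described above can be sketched as a single polling pass in plain Java; the directory names and the `process` placeholder are illustrative assumptions, not details from the original system:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class UploadPoller {
    // One polling pass: handle every regular file found in the upload
    // directory, then move it to the archive directory so the next
    // pass does not reprocess it.
    public static int pollOnce(Path uploadDir, Path archiveDir) throws IOException {
        int processed = 0;
        try (Stream<Path> files = Files.list(uploadDir)) {
            for (Path f : (Iterable<Path>) files::iterator) {
                if (!Files.isRegularFile(f)) continue;
                process(f); // placeholder for the real parse-and-load step
                Files.move(f, archiveDir.resolve(f.getFileName()),
                           StandardCopyOption.REPLACE_EXISTING);
                processed++;
            }
        }
        return processed;
    }

    private static void process(Path f) {
        // in the real system: parse the file and load it into the database
    }
}
```

The resilience question then comes down to where `uploadDir` lives: on node-local storage only the owning instance can run this pass, whereas on shared storage (ACFS, NFS, or similar) any surviving node could take over the scheduled job.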