Hi Everyone,

I am using sqlite3 with Python 2.5 and the pysqlite wrapper. I am trying
to copy tables from one database (in memory) to another database (a file)
using ATTACH. I found a couple of sites on the internet that show how to do
this, but the table schema is not copied.

def SaveToFile(self, filename):
    # Attach an external db file under a fixed alias; ATTACH creates
    # the file if it does not already exist.  A raw filename is not a
    # valid alias (dots and slashes break it), so bind it as a
    # parameter and use the plain identifier "filedb" instead.
    self.__curs.execute("ATTACH DATABASE ? AS filedb", (filename,))
    table_names = self.__curs.execute(
        "SELECT name FROM main.sqlite_master WHERE type='table'").fetchall()
    for table_name, in table_names:
        # Copy each in-memory table to the persistent db.  Note that
        # CREATE TABLE ... AS SELECT copies column names and data only,
        # not the full schema (constraints, indices, defaults).
        self.__curs.execute("CREATE TABLE filedb.%s AS SELECT * FROM main.%s"
                            % (table_name, table_name))
    self.__curs.execute("DETACH DATABASE filedb")  # detach the external db


Is there a way to copy the schema from the sqlite_master table? I know I
can use iterdump() from within Python to dump to text-based SQL, but I would
prefer not to do this. Eventually my goal is to dynamically load and unload
sections of a file-based database (could be tables or rows) in and out of
memory for efficiency purposes. Any help is appreciated. Thanks in advance.
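For what it's worth, one way to carry the full schema across is to replay the CREATE statements that SQLite stores in the "sql" column of sqlite_master. This is only a sketch: the function name save_to_file, the alias filedb, and the naive name-rewriting are placeholders of mine, not from any documentation.

```python
import sqlite3

def save_to_file(mem_conn, filename):
    """Copy every table (schema and rows) from an in-memory db to a file."""
    cur = mem_conn.cursor()
    # Bind the filename as a parameter and use a fixed alias: a raw
    # filename such as "out.db" is not a valid ATTACH alias.
    cur.execute("ATTACH DATABASE ? AS filedb", (filename,))
    # sqlite_master keeps the original CREATE statement in its "sql"
    # column, so replaying it preserves column types and constraints.
    tables = cur.execute(
        "SELECT name, sql FROM main.sqlite_master WHERE type='table'"
    ).fetchall()
    for name, sql in tables:
        # Naive rewrite of "CREATE TABLE name" -> "CREATE TABLE filedb.name";
        # good enough for simple schemas, fragile for exotic table names.
        cur.execute(sql.replace(name, "filedb." + name, 1))
        cur.execute("INSERT INTO filedb.%s SELECT * FROM main.%s" % (name, name))
    mem_conn.commit()
    cur.execute("DETACH DATABASE filedb")
```

Indices, triggers, and views could be handled the same way by widening the WHERE clause on sqlite_master.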

Denis


  • Benjamin Peterson at Aug 30, 2010 at 7:00 pm
    Denis Gomes <denisg640 <at> gmail.com> writes:
    Eventually my goal is to dynamically load and unload sections of a file-based
    database (could be tables or rows) in and out of memory for efficiency purposes.

    Have you actually found this to be a useful optimization? SQLite already
    caches database pages in memory internally.
  • Denis Gomes at Aug 30, 2010 at 7:33 pm
    Hey Benjamin,

    Take a look at this website I found about cached and in-memory databases.
    I think the gist of the article is that caching helps if you are doing
    SELECTs on frequently used data, whereas an in-memory database speeds up
    writes (INSERTs and UPDATEs) to the db as well as querying. Maybe I am
    missing something?

    http://www.mcobject.com/in_memory_database

    Denis

  • Benjamin Peterson at Aug 30, 2010 at 7:51 pm
    Denis Gomes <denisg640 <at> gmail.com> writes:

    Hey Benjamin,

    Take a look at this website I found about cached and in-memory databases. I
    think the gist of the article is that caching is good if you are doing SELECTs
    on data that is frequently used whereas in-memory speeds up writes (inserts and
    updates) to the db as well as querying. Maybe I am missing something?

    Well, of course, but there's little point in doing INSERTs and UPDATEs if you
    never write them to disk at some point. You could just use a long-running
    transaction, which will not write to the database file until you commit it
    (though depending on how you have SQLite set up, it may write to a journal
    file).
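Benjamin's point can be sketched like this (file and table names here are hypothetical; Python's sqlite3 module opens a transaction implicitly before the first data-modifying statement):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "batch.db")  # hypothetical file
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE log (n INTEGER)")

# The sqlite3 module starts a transaction before the first INSERT,
# so all of these rows stay buffered in one transaction (main db
# file untouched, journal aside) until commit().
for i in range(1000):
    conn.execute("INSERT INTO log (n) VALUES (?)", (i,))
conn.commit()  # the batched writes reach the database file here
```

Batching many small writes into one transaction is usually the single biggest SQLite write-speed win, since each commit forces a sync to disk.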
  • Denis Gomes at Aug 30, 2010 at 8:50 pm
    Yep, I see what you are saying. I am going to do a bit more research into
    how sqlite3 works internally, i.e., cache size, page size, etc., and then
    decide whether I need to mess with in-memory databases.
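For reference, the knobs Denis mentions are exposed as PRAGMAs and can be inspected or tuned from Python; the values below are only examples, not recommendations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# page_size must be set before the first table is created
conn.execute("PRAGMA page_size = 4096")
# negative cache_size means KiB; positive means number of pages
conn.execute("PRAGMA cache_size = -2000")  # roughly 2 MB of page cache

print(conn.execute("PRAGMA page_size").fetchone()[0])   # 4096
print(conn.execute("PRAGMA cache_size").fetchone()[0])  # -2000
```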

    Thanks for your insight, appreciate it.

    Denis
  • John Nagle at Aug 30, 2010 at 8:50 pm

    In general, if you find yourself concerned about sqlite performance,
    it's time to move to MySQL or Postgres. "sqlite" is for simple,
    lightweight databases mostly accessed by one process at a time.
    You store your program's configuration options in sqlite. You
    store your web site's business transactions in MySQL.

    "sqlite" has reasonably good SELECT performance on simple indices,
    but anything beyond that isn't all that great. Multiple processes
    updating the same sqlite database will have terrible performance,
    because the locking mechanism not only locks the entire table,
    the wait mechanism is a timed retry.

    John Nagle
  • Benjamin Peterson at Aug 30, 2010 at 10:27 pm

    John Nagle <nagle <at> animats.com> writes:
    "sqlite" has reasonably good SELECT performance on simple indices,
    but anything beyond that isn't all that great. Multiple processes
    updating the same sqlite database will have terrible performance,
    because the locking mechanism not only locks the entire table,
    the wait mechanism is a timed retry.
    This has been improved significantly in recent versions, with support for
    write-ahead logging and different locking levels to prevent writer
    starvation.
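For anyone finding this thread later: write-ahead logging (added in SQLite 3.7.0) is enabled per database file with a PRAGMA. A minimal sketch, with a throwaway file name of my own choosing:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "shared.db")
conn = sqlite3.connect(path)
# WAL mode is persistent: it is recorded in the database file itself,
# so later connections get it too.  Readers no longer block the
# (single) writer, and vice versa.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal' (WAL does not apply to in-memory databases)
```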

Discussion Overview
group: python-list @ python.org
posted: Aug 30, '10 at 5:07p
active: Aug 30, '10 at 10:27p
posts: 7
users: 3
