• Architecture Question (Access 97)

    #367958

    I am developing a small application to track contacts with a group of project developers (approx 250) including tombstone data and records of telephone calls and correspondence as well as their staff members (~350), the status of the projects themselves (~150 submissions to date), and the status of the contracts arising out of successful proposals (~50). As well there are reminder items and insurance policy tables and so on. We have 6 users who will likely have the database open but inactive through most of the day.

    I have split the database into a front end and a back end, and until today I have been distributing the front end by e-mail and keeping the back end on a network share available to all users; this has worked to date. The back end has been replicated, and the users’ front ends link their tables to the replica. If continued development requires a change in the table layouts or new tables, I make the changes in the design master, synchronize it to the replica on the network drive, and distribute a new front end if required.

    I do most of my development work on a front end linked to a copy of the design master that I have placed on my local drive. Today I noticed a dramatic difference in performance between the local and network back ends when I linked to the network file. It was horrible – and I think the users are going to feel the same way. If they don’t like it they won’t use it – and that defeats the whole point.

    I am considering replicating a local back end onto each user’s hard drive, and including some OnTimer procedures to check the time every -say- 300 seconds and, if it is more than 15 minutes since a synchronization with the network back end, force a synchronization through JRO. Does anyone have any experience with that sort of set up? Good, Bad, or Ugly?
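    The forced synchronization itself is essentially one call. A minimal sketch of the JRO route (note that JRO ships with MDAC/Jet 4.0, so it may not be usable against a Jet 3.5 back end – under plain Access 97, DAO 3.5’s Database.Synchronize is the native equivalent; both paths below are placeholders, not from any real setup):

    ```vba
    ' Hedged sketch: force a two-way exchange between a local replica and the
    ' network master via JRO. Requires a reference to the Microsoft Jet and
    ' Replication Objects (JRO) library; paths are placeholders.
    Public Sub ForceSynch()
        Dim rep As JRO.Replica
        Set rep = New JRO.Replica

        rep.ActiveConnection = "C:\AppData\Local_BE.mdb"        ' local replica
        rep.Synchronize "\\Server\Share\Master_BE.mdb", _
                        jrSyncTypeImpExp, jrSyncModeDirect      ' import and export changes

        Set rep = Nothing
    End Sub
    ```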

    Any enlightenment appreciated.

    • #575331

      I don’t quite understand what replication has to do with this. If the back end is on the network, then why is it replicated? If you replicated the back end and placed a copy of it on each machine, I could understand it. Otherwise, there doesn’t seem to be much purpose in replication. shrug

      Syncing the replicas every 5 minutes is going to result in a lot of network traffic, so you might want to give some thought to whether partial replicas might be a viable option. Does everyone really need to see all the information all the time? If not, then partial replicas might help. If the information is inactive most of the day, then why do you need to sync it so frequently? You could always give the users a button to sync the back end on demand so that it would be done only when they needed it, not constantly.
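      If partial replicas look viable, DAO 3.5 supports them directly. A sketch of creating and populating one – the filter expression, table name, and paths are all invented for illustration:

      ```vba
      ' Hedged sketch: build a partial replica with DAO 3.5 (Access 97).
      ' Table name, filter, and paths are illustrative placeholders.
      Public Sub MakePartialReplica()
          Dim db As DAO.Database
          Dim td As DAO.TableDef

          ' Create an empty partial replica from the network master.
          Set db = DBEngine.OpenDatabase("\\Server\Share\Master_BE.mdb")
          db.MakeReplica "C:\AppData\Partial_BE.mdb", "Partial replica", dbRepMakePartial
          db.Close

          ' Set a row filter on one table and pull down only the matching data.
          ' PopulatePartial requires the partial replica opened exclusively.
          Set db = DBEngine.OpenDatabase("C:\AppData\Partial_BE.mdb", True)
          Set td = db.TableDefs("Contacts")
          td.ReplicaFilter = "Region = 'East'"
          db.PopulatePartial "\\Server\Share\Master_BE.mdb"
          db.Close
      End Sub
      ```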

      Actually, your best option would be to upgrade to either Access 2000 or XP and use a SQL Server/MSDE backend.

      • #575557

        Charlotte
        “If you replicated the back end and placed a copy of it on each machine, I could understand it.”

        Obviously my explanation was not very clear, since that is what I am thinking of doing: placing a replicated back end on each user machine, which they would use as a data source, and forcing synchronization with a network “master” periodically. The existing back end is replicated because that let me change table layouts and introduce new tables during a period when users were inputting lots of data. In retrospect, that replication was probably not needed – but it will live on like the human appendix. Linking to the back end on a network drive is getting to be slow – I will have to do more work to determine whether this is a consistent problem or I just hit it at a bad moment.

        The question now is whether to distribute replicas to user machines and if so, how to manage the synchronization. I could allow users to synchronize at their convenience by way of a command button or force a synchronization before printing or running certain queries – but I am worried that the user who has not synchronized may have input relevant information.

        I am considering attaching an on-timer event to each form in the database that would check periodically whether the last synchronization had occurred less than a given period of time previously (I would have to create a local table to record the time of the last synch). If the last synch was -say- more than an hour (or 20 minutes, or ???) previously, it would write a new record in the ‘lastsynch’ table and then perform a synchronization. Because the forms will open and close during use of the database, I was thinking that the interval between tests of the time since the last synch would have to be fairly short, or the test would never run before the form closed (I assume that opening the form resets the timer). If it were possible to attach the on-timer event to the database as a whole rather than to particular forms, the arrangement could be simplified; rather than recording the last synch and testing the elapsed time, the routine could simply synchronize every -say- 20 or 30 minutes.
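        That per-form arrangement might be sketched like this in a form module – assuming a local table named ‘lastsynch’ with a Date/Time field ‘SynchTime’ (both names invented here), and using DAO for the synchronization itself; paths are placeholders:

        ```vba
        ' Sketch of the per-form OnTimer scheme described above. Table, field,
        ' and path names are illustrative, not from the original post.
        Private Sub Form_Load()
            Me.TimerInterval = 300000            ' test every 300 seconds (milliseconds)
        End Sub

        Private Sub Form_Timer()
            Dim db As DAO.Database
            Dim lastSynch As Variant

            lastSynch = DMax("SynchTime", "lastsynch")   ' most recent recorded synch

            ' Synchronize if there is no record yet, or the last synch is
            ' more than 20 minutes old.
            If IsNull(lastSynch) Or DateDiff("n", lastSynch, Now()) > 20 Then
                CurrentDb.Execute "INSERT INTO lastsynch (SynchTime) VALUES (Now())", dbFailOnError
                Set db = DBEngine.OpenDatabase("C:\AppData\Local_BE.mdb")
                db.Synchronize "\\Server\Share\Master_BE.mdb", dbRepImpExpChanges
                db.Close
            End If
        End Sub
        ```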

        • #575667

          The database does not have a timer event; only forms do. I would be cautious about changing tables while a lot of data was being entered. One of the more annoying problems with replication is what happens when both data and design get changed at the same time. If you have developed the habit of modifying the design master while people are using the replicas, get out of it. You’ll have to change your approach anyhow when you move to Access 2000 and later.
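          A common workaround for the lack of a database-level timer is a single hidden form, opened at startup, that owns the timer for the whole session. A sketch – the form name and paths are invented for illustration:

          ```vba
          ' Code behind a hidden form (here called 'frmSynchTimer') that the
          ' Autoexec macro or startup routine opens once, e.g.:
          '     DoCmd.OpenForm "frmSynchTimer", WindowMode:=acHidden
          ' Because the form stays open for the whole session, its timer keeps
          ' running regardless of which other forms open and close.
          Private Sub Form_Load()
              Me.TimerInterval = 1200000           ' 20 minutes, in milliseconds
          End Sub

          Private Sub Form_Timer()
              Dim db As DAO.Database
              ' Local replica and network master paths are placeholders.
              Set db = DBEngine.OpenDatabase("C:\AppData\Local_BE.mdb")
              db.Synchronize "\\Server\Share\Master_BE.mdb", dbRepImpExpChanges
              db.Close
          End Sub
          ```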
