• WSLarryEngles

    @wslarryengles

    Viewing 15 replies - 31 through 45 (of 59 total)
    • in reply to: Search all tables #1229394

      I think that was my possible solution from earlier in the thread.

      I should have read more closely! I agree though, it's not a nice problem.

    • in reply to: Search all tables #1229309

      The obvious thing (alluded to in previous replies) seems to be poor database design. A separate table per year, and a separate table for the addresses too? It should all be one table with customerDate or customerYear as a field. Too bad you are stuck with it.

      Anyway, just thought I’d add my two cents about another way to approach the problem. You can do this without VBA (although I’d probably use it anyway). Create 70 queries (yuk!) that are something like:

      SELECT DISTINCT "FY10Customers" AS TableName, CustomerName FROM FY10Customers;

      Then create a union query for these (one would hope for fewer) 70 tables. You should be able to do a query on this query to get table name and customer name.
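
      If you do go the VBA route, one option is to let a loop over the TableDefs collection build the UNION SQL for you instead of hand-writing 70 SELECTs. A rough sketch, assuming the yearly tables all follow an FY##Customers naming pattern and all have a CustomerName field (the query name qryAllCustomers is just a placeholder):

      Dim db As DAO.Database, tdf As DAO.TableDef
      Dim strSQL As String

      Set db = CurrentDb
      For Each tdf In db.TableDefs
          ' Assumes every yearly table is named like FY10Customers, FY11Customers, ...
          If tdf.Name Like "FY##Customers" Then
              If Len(strSQL) > 0 Then strSQL = strSQL & " UNION "
              strSQL = strSQL & "SELECT DISTINCT """ & tdf.Name & _
                       """ AS TableName, CustomerName FROM [" & tdf.Name & "]"
          End If
      Next tdf

      ' Save it as a query you can then filter with a simple WHERE clause
      ' (delete any existing qryAllCustomers first)
      db.CreateQueryDef "qryAllCustomers", strSQL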

    • in reply to: Aggregate User Function #1224130

      Of course that will work, that’s what the error message says. The point is: I only want to group by Project not by Project and mySum.

      Why don't you just build a query and test it? Then you can look at the SQL for the query and put it in code, if that is your objective.
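
      For example, once the test query is saved (say as qryProjTotals – a made-up name), you can pull its SQL straight from VBA and paste it into your code:

      ' Print the SQL behind a saved query so it can be reused in code
      Debug.Print CurrentDb.QueryDefs("qryProjTotals").SQL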

      I must admit, the problem and objective are not clear to me.

    • in reply to: Aggregate User Function #1224084

      A corrected syntax will work. Try something like:

      SELECT ProjName, mySum(Amount) AS myNumber FROM tblProj GROUP BY ProjName, mySum(Amount);

      Worked for me.

    • in reply to: Importing into Access from Excel #1224014

      It is probably not the spaces. You have two columns with the same name. When I put a 2 after one of them, I was able to import into Access 2007 just fine. I did notice a lot of leading blanks in row 1, so you might want to take a look at that. I did remove the leading blanks, but really think the duplicate heading was the problem.

      I hope this is a one-time thing for a report and not the start of an Access database. It jumps out at me that you will end up with four sets of fields to designate four facilities (?). In a database you should have one set of fields with four rows. But that is a separate subject.

    • in reply to: Access report design extremely slow #1224013

      I've run into this sort of situation a few times. There are almost always ways to dramatically improve speed, but you may have to be willing to experiment a bit.

      First, check that your table is indexed properly for the selection criteria. I suspect that is already the case, but it is worth mentioning.

      Second, experiment with some make-table queries. In the past, I have made a second table based on the first query's results, then run crosstabs on that. If you make a temporary table, you may be able to add indexes to speed the crosstab up; however, building the indexes takes its own time too. Just try one thing, then another. If your back end is a linked table, keep the temporary table local (i.e., in the front end).

      If that proves to be much faster, then you can implement the sequence in code: 1) clear the temporary table, 2) run a query to repopulate it. This code could go in the report's Open event.
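
      A rough sketch of that sequence, assuming a local temporary table named tmpCrosstabSource and a saved append query named qryFillTmpCrosstabSource (both names are placeholders):

      Private Sub Report_Open(Cancel As Integer)
          Dim db As DAO.Database
          Set db = CurrentDb

          ' 1) Clear the local temporary table
          db.Execute "DELETE * FROM tmpCrosstabSource", dbFailOnError

          ' 2) Repopulate it from the linked back-end data
          db.Execute "qryFillTmpCrosstabSource", dbFailOnError
      End Sub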

      With regard to the report being slow during design time, why not use a make-table query to hold the results of the crosstab? Then use that table as your record source while you are designing, and switch back to the crosstab for production.

      Just a few thoughts. Hopefully one of them will help.

    • in reply to: Access 2010 auto insert from field above issue #1223107

      I cannot reproduce this in Access 2007 with a database in Access 2003 format. It should take code to produce this sort of behavior, and that we could all help you with.

      Maybe it is one of those “south of the equator” things.

    • in reply to: Access 2010 auto insert from field above issue #1223068

      Excel involved – I'm not surprised. I am surprised, though, that this happens with entries made directly in Access. In fact, I am going to ask for a demonstration.

      In Access, open the table directly (no query, no form). Enter something into any field other than the one in question (let's call the field in question Field 1). Show us a snapshot of Field 1 before and after the entry into the other field. Then commit the new record (by moving to a different record) and make a snapshot of Field 1 again.

      Post those and we’ll go from there.

    • in reply to: Passing an array to OpenArgs #1223052

      John is completely correct: OpenArgs is a string. You might want to consider making the array public and then using OpenArgs just to tell the form to go look at the public array. That might be much easier than building the string and then parsing it again on the other side. Just a thought.
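
      A minimal sketch of that idea – the module-level array, form name, and flag value are all just for illustration:

      ' In a standard module
      Public gSelectedIDs() As Long

      Sub ShowDetail()
          ReDim gSelectedIDs(0 To 2)
          gSelectedIDs(0) = 101: gSelectedIDs(1) = 205: gSelectedIDs(2) = 317

          ' OpenArgs just carries a flag telling the form to use the public array
          DoCmd.OpenForm "frmDetail", OpenArgs:="UseSelectedIDs"
      End Sub

      ' In frmDetail's module
      Private Sub Form_Open(Cancel As Integer)
          If Nz(Me.OpenArgs, "") = "UseSelectedIDs" Then
              ' Read gSelectedIDs() here instead of parsing a delimited string
          End If
      End Sub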

    • in reply to: Access 2010 auto insert from field above issue #1223051

      Can you provide further details? I have also been an Access user (and developer) since day 1 of Access 1.0, and I am unaware of any automatic behavior like this. Excel does do this, but not Access, as far as I have ever noticed (maybe I'm not nearly as observant as I thought). If you are using an AutoNumber field, you can't enter a value into it at all.

    • in reply to: Need something faster than DLookup #1221767

      A very interesting result – not exactly what I had expected, but good anyway! The test table was a linked (attached) table with over 1.6 million rows and a four-field unique key. I set up random lookups on that key, drawing the key values from separate tables that each held part of it, and made three completely separate runs, each using only one lookup method. Since I was doing all this work anyway, I also looked at the .FindFirst method on an open recordset, and the results on that were even worse than I expected. Here are the results (ms = milliseconds):

        Method       Lookups   Min ms   Max ms   Avg ms
        FindFirst         10      655     4368     2128
        Query           1000        0       46        4
        Seek            1000        0       15     0.05

      I must admit I'm surprised by how much faster the Seek is. It is definitely more complicated to use, but it would be worth using if you were looking up thousands of times or more (for 1,000 lookups, the total time is 4 seconds vs 0.05 seconds). I think I've only used a Seek once myself.

      From what I can tell from the original problem description, I'd guess the user will not be able to tell the difference between a query implementation and a .Seek implementation. My inclination would be to implement with a query and see how it goes. If it is too slow, then move to a .Seek and see whether it improves response time; but if you have a slow network, I'm not sure you'll see much improvement. I just don't know how a slow network affects the response times (well, we all know it slows things down). It could be that the lookup time is just lost in the noise.
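
      In case it helps, a .Seek lookup looks roughly like the sketch below (database path, table, index, and field names are placeholders). Note that Seek requires a table-type recordset, so you open the back-end database directly rather than going through the linked table:

      Dim db As DAO.Database, rs As DAO.Recordset
      Dim k1 As Variant, k2 As Variant, k3 As Variant, k4 As Variant

      ' Seek only works on a table-type recordset in a database opened directly
      Set db = DBEngine.OpenDatabase("\\server\share\Backend.mdb")
      Set rs = db.OpenRecordset("tblCustomers", dbOpenTable)
      rs.Index = "uqCustomerKey"        ' the four-field unique index

      ' ... assign the four key values to k1..k4 ...
      rs.Seek "=", k1, k2, k3, k4
      If rs.NoMatch Then
          Debug.Print "Not found"
      Else
          Debug.Print rs!CustomerName
      End If

      rs.Close
      db.Close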

    • in reply to: Need something faster than DLookup #1221695

      Larry – I would be interested in a comparison between the seek and the regular query (since it sounds to me like you'd like to test that!). I have a lot of work to do with datasets in this project, so I want to optimize my interaction with the data. The data has to be split for security reasons – I just wish I had a faster network.

      I’ll try to get to this in the next day. I’m pretty sure I can.

    • in reply to: Need something faster than DLookup #1221644

      Thanks for everyone's feedback. I don't have 4 million records – more like 500,000 (and growing) – but I don't have a “robust server” either. This is a front-end/back-end situation, but I'm only using Access tables as my back end, not SQL Server, so I'm hampered by speed that way and by the fact that the network is slow.

      I still think you would be best served by a simple query in code (e.g., SELECT * FROM tablename WHERE <criteria here>;). Remember, the Jet engine is optimized for queries.
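
      Something like this minimal sketch, where the table, field, and key names are placeholders:

      Dim rs As DAO.Recordset
      Dim lngID As Long

      lngID = 123456    ' whatever key value you are looking up

      ' Pull back only the one row you need; Jet can use the index on CustomerID
      Set rs = CurrentDb.OpenRecordset( _
          "SELECT CustomerName FROM tblArchive WHERE CustomerID = " & lngID & ";", _
          dbOpenSnapshot)

      If Not rs.EOF Then Debug.Print rs!CustomerName
      rs.Close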

      I just ran a test against a 1.6GB table with 1.5+ million rows. The table is a linked Access table. With a query, selecting a specific row (against an indexed field) took well under 1 second. The linked table is on the same PC, so it is not slowed down by network access – which can be a significant slowdown. A slow network is even more reason to select a single row (or the minimum number of rows you need). I suspect (without testing) that having an open recordset that you use seeks on would be slower. I could be mistaken on that, but don’t think so. I’d be interested in a comparison. Come to think of it, I might do that and let you know if you are interested. Let me know. It would not take long to test.

      Let us know how it goes.

    • in reply to: Need something faster than DLookup #1221091

      How big is “huge”? How many records in the archived table?

      Why not just write a select statement for a recordset that gets only the specific record?

    • in reply to: Hangs on Apply Filter #1221087

      Try Me.FilterOn = False in code instead of clearing the filter.
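
      For example, in the Click event of whatever button clears the filter (the button name here is made up):

      Private Sub cmdClearFilter_Click()
          ' Turn the filter off without rebuilding or clearing the Filter string
          Me.FilterOn = False
      End Sub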

      If you can't break in with Ctrl+Break, then put in a breakpoint and step through the code to see what is being done.
