From: Jesse Tayler <jtayler@oeinc.com>
Subject: Re: [WO-DEV] Cannot determine primary key for entity
Date: Fri, 11 Jun 2021 11:18:47 -0400
To: WebObjects & WOnder Development <webobjects-dev@wocommunity.org>

Did you dump the database's own logic tree output? Sometimes you can see the points where it decides on using an index or whatever, and perhaps a failure is either visible or there's a threshold limitation below EOF.

I doubt EOF has the awareness to do other than simply backtrace like this, so I just wonder if there are lower-level reports you can review.
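If nothing else, you can make EOF itself considerably more verbose than the er.transaction.adaptor summary below: the standard NSLog debug groups report the generated SQL and the low-level adaptor traffic. A minimal sketch (set once at application startup):

===
// Sketch: enable EOF's own low-level logging (generated SQL, database
// access, EO lifecycle) via the standard NSLog debug groups.
import com.webobjects.foundation.NSLog;

public class EOFDebug {
    public static void enable() {
        NSLog.allowDebugLoggingForGroups(NSLog.DebugGroupSQLGeneration
                | NSLog.DebugGroupDatabaseAccess
                | NSLog.DebugGroupEnterpriseObjects);
        // raise the level so the groups above actually emit output
        NSLog.debug.setAllowedDebugLevel(NSLog.DebugLevelDetailed);
    }
}
===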

On Jun 11, 2021, at 11:08 AM, OCsite <webobjects-dev@wocommunity.org> wrote:

P.P.S. It actually does look like the weird cancelled fetch was somehow affected by the background task; see what further investigation of the case turned up below.

On 11. 6. 2021, at 15:20, OCsite <webobjects-dev@wocommunity.org> wrote:
===
15:05:48.528 DEBUG === Begin Internal Transaction //log:NSLog [WorkerThread5]
15:05:48.528 DEBUG evaluateExpression: <com.webobjects.jdbcadaptor.FrontbasePlugIn$FrontbaseExpression: "SELECT ... t0."C_UID", ... FROM "T_RECORD" t0 WHERE t0."C_IMPORT_ID" = 1003547" withBindings: > //log:NSLog [WorkerThread5]
15:05:49.937 DEBUG fetch canceled //log:NSLog [WorkerThread5]
15:05:49.937 DEBUG 164 row(s) processed //log:NSLog [WorkerThread5]
15:05:49.941 DEBUG === Commit Internal Transaction //log:NSLog [WorkerThread5]
15:05:49.941 INFO Database Exception occured: java.lang.IllegalArgumentException: Cannot determine primary key for entity DBRecord from row: {... uid = <com.webobjects.foundation.NSKeyValueCoding$Null>; ... } //log:er.transaction.adaptor.Exceptions [WorkerThread5]
===

I've found that a background fetch ran concurrently and was cancelled too, at essentially the same time, just a couple of hundredths of a second later:

===
15:05:47.355 DEBUG === Begin Internal Transaction //log:NSLog [MainPageSlaveRowsCountThread_Cizí nákupy EB]
15:05:47.355 DEBUG evaluateExpression: <com.webobjects.jdbcadaptor.FrontbasePlugIn$FrontbaseExpression: "SELECT count(*) FROM "T_RECORD" t0, "T_IMPORT" T1, "T_IMPORT" T3, "T_RECORD" T2 WHERE (t0."C_OWNER__ID" is NULL AND T3."C_DATA_BLOCK_ID" = 1000387) AND t0."C_IMPORT_ID" = T1."C_UID" AND T2."C_IMPORT_ID" = T3."C_UID" AND T1."C_OWNER_RECORD_ID" = T2."C_UID"" withBindings: > //log:NSLog [MainPageSlaveRowsCountThread_Cizí nákupy EB]
15:05:49.973 DEBUG fetch canceled //log:NSLog [MainPageSlaveRowsCountThread_Cizí nákupy EB]
15:05:49.975 DEBUG 1 row(s) processed //log:NSLog [MainPageSlaveRowsCountThread_Cizí nákupy EB]
15:05:49.983 DEBUG === Commit Internal Transaction //log:NSLog [MainPageSlaveRowsCountThread_Cizí nákupy EB]
===

Note it runs over a different OSC, but still: might this be the culprit? Would EOF somehow cancel a fetch if two of them happen at the same moment, even though they happen in different ECs over different OSCs?

I do not lock here, for there is absolutely no danger that more threads would use the same EC concurrently (though several different background threads could concurrently use ECs over the same OSC). I thought that was all right in this case. Is it not? Should I try to lock those ECs, or even the OSC?

Thanks,
OC


Any idea what might go wrong and how to fix it? = Thanks!
OC

On 11. 6. 2021, at 13:37, OCsite <webobjects-dev@wocommunity.org> wrote:

Hi there,

just bumped into another weird EOF case. A pretty plain fetch caused a “Cannot determine primary key for entity” exception. The row contains a number of columns whose values make sense, some null, some non-null, with one exception: the primary key, modelled as the attribute uid, is indeed null, thus the exception makes perfect sense.

How can this = happen?

===
IllegalArgumentException: Cannot determine primary key for entity DBRecord from row: {... uid = <com.webobjects.foundation.NSKeyValueCoding$Null>; ... }
  at com.webobjects.eoaccess.EODatabaseChannel._fetchObject(EODatabaseChannel.java:348)
     ... skipped 2 stack elements
  at com.webobjects.eocontrol.EOObjectStoreCoordinator.objectsWithFetchSpecification(EOObjectStoreCoordinator.java:488)
  at com.webobjects.eocontrol.EOEditingContext.objectsWithFetchSpecification(EOEditingContext.java:4069)
  at er.extensions.eof.ERXEC.objectsWithFetchSpecification(ERXEC.java:1215)
     ... skipped 1 stack elements
  at com.webobjects.eocontrol.EOObjectStoreCoordinator.objectsForSourceGlobalID(EOObjectStoreCoordinator.java:634)
  at com.webobjects.eocontrol.EOEditingContext.objectsForSourceGlobalID(EOEditingContext.java:3923)
  at er.extensions.eof.ERXEC.objectsForSourceGlobalID(ERXEC.java:1178)
     ... skipped 1 stack elements
  at com.webobjects.eoaccess.EOAccessArrayFaultHandler.completeInitializationOfObject(EOAccessArrayFaultHandler.java:77)
  at com.webobjects.eocontrol._EOCheapCopyMutableArray.willRead(_EOCheapCopyMutableArray.java:45)
  at com.webobjects.eocontrol._EOCheapCopyMutableArray.count(_EOCheapCopyMutableArray.java:103)
  at com.webobjects.foundation.NSArray.isEmpty(NSArray.java:1888)
...
===

Just in case it happens to be important (I believe it is not), the problem happens at this row:

... = eolist.representedObject.records().isEmpty() ? ... : ...

where records just returns storedValueForKey('records'), self-evidently a fault, which fires to fetch the rows.
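For completeness, the accessor is nothing but a covering method over the stored value; sketched in Java, it is equivalent to:

===
// Sketch of the accessor: a plain covering method over the stored
// to-many value. The returned array is a fault; asking it isEmpty()
// (or count()) is what fires the fetch shown in the stack trace above.
@SuppressWarnings("unchecked")
public NSArray<DBRecord> records() {
    return (NSArray<DBRecord>) storedValueForKey("records");
}
===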

Searching the Web, all I've found is this (linked from here), which does not really help :) Truth is, some background threads do run at that moment; they are comparatively plain, though, and I can't see why they should cause the problem for the R/R thread. All they do is:

1. get their own OSC = from the pool, making sure they never get = the same OSC normal sessions have
2. = create a new ERXEC in this OSC
3. get a local = instance of an object in the EC

=== this is the code of the background thread; a number of these run:
def store
for (def pool = ERXObjectStoreCoordinatorPool._pool();;) {
    store = pool.nextObjectStore
    if (store != _sessionosc) break // there's one OSC for all sessions, stored in _sessionosc
}
return eo.localInstanceIn(ERXEC.newEditingContext(store)).numberOfMasterRowsWithoutOwner()
===

and the method simply = fetches:

===
int numberOfMasterRowsWithoutOwner() {
    def mymasterrow = EOQualifier.qualifierWithQualifierFormat("importObject.dataBlock = %@ AND recordOwner = NULL", [this] as NSA)
    return ERXEOControlUtilities.objectCountWithQualifier(this.editingContext, 'DBRecord', mymasterrow)
}
===
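If explicit locking should indeed be the answer, I suppose the background-thread code above would simply grow a lock()/unlock() pair around the fetch; a sketch in plain Java, reusing the store and eo names from above:

===
// Sketch only, reusing store/eo from the background-thread code above:
// explicitly lock the fresh EC for the duration of the fault/fetch,
// in case autolocking does not cover this background thread.
EOEditingContext ec = ERXEC.newEditingContext(store);
ec.lock();
try {
    DBRecord local = ERXEOControlUtilities.localInstanceOfObject(ec, eo);
    return local.numberOfMasterRowsWithoutOwner();
} finally {
    ec.unlock();
}
===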

Most of the time it works properly. Occasionally (rather rarely) the problem above happens. Can you see what I am doing wrong?
Thanks a lot,
OC




