From: "Matthew Ness" <matt@logicsquad.net>
To: wo-dev <webobjects-dev@wocommunity.org>
Date: Wed, 16 Jun 2021 10:57:15 +1000
Subject: Re: [WO-DEV] old/invalid :1 relationship value, how the H. possible?!?

On Wed, Jun 16, 2021, at 1:42 AM, OCsite wrote:
> Hi there,
>
> today we have bumped into a weird behaviour which I can't grok. We got an old/invalid value for a :1 relationship to an important object (important enough that we even added its FK to the modelled locked attributes alongside the PK; not that it helped any, but thanks to that we can see the current snapshot values in the database operations).
>
> 1. User A stored a new object into the relationship; since we log all database operations, we can see for his EC how this happened:
>
> 11:00:12.995 UPDATE on 'DBAuction' ((uid = 1005602) and (lastValidPriceOfferID = 1061694)) -> {lastValidPriceOfferID:1061695}
>
> The *saveChanges* operation did not fail and the appropriate R/R loop ended without a glitch -- no exception, no problem at all -- at 11:00:13.036.
>
> 2. User B changed the object; again, we can see for his EC:
>
> 11:00:25.220 UPDATE on 'DBAuction' ((uid = 1005602) and (lastValidPriceOfferID = 1061695)) -> {lastValidPriceOfferID:1061698}
>
> The *saveChanges* operation again finished successfully and the appropriate R/R loop ended all right at 11:00:25.424.
>
> 3. In a new R/R loop which started at 11:00:40.947, user A read the value (in the very standard way, through *storedValueForKey('lastValidPriceOffer')*, which is the one we observe all the time, modelled as a simple :1 relationship with FK *lastValidPriceOfferID*) -- and *got the stale object PK:1061695*!
>
> How is it possible that step 2 did not update the values in user A's EC? Incidentally, to make extra sure we do not get stale values even if R/R threads happen to overlap, the code used in step 3 looks like this:
>
>     synchronized (alock) {   // lock for the DBAuction 1005602 object (conceptually on the PK, to lock out all users regardless of their ECs)
>         try {
>             auction.editingContext().unlock()  // so as to process recent changes from other ECs (thanks, Chuck!)
>             auction.editingContext().lock()    // of course we have to keep it locked
>             DBPriceOffer wins = auction.lastValidPriceOffer()  // simply calls storedValueForKey('lastValidPriceOffer')
>             println "... last valid $wins"
>
> and this very log shows we got the old price offer, the one with PK:1061695.
>
> I could understand this if user A changed the object to 1061695 and *did not save it* (in which case the EC sync would not touch the change; but then it would be all right, since the change was not saved). But he very definitely did save the 1061695 successfully in step 1, and never changed it later (until he got it again -- the old/invalid value! -- in step 3).
>
> Can anyone see a scenario which would lead to him still seeing 1061695 in step 3, given that
> - he did save it successfully at 11:00:12 and never changed it later;
> - another user did change it to 1061698 and successfully saved at 11:00:25;
> - aside from the extra lock/unlock shown above, there is no manual EC locking; ECs are auto-locked during R/R loops;
> - and besides, even if the extra lock/unlock were not used, the R/R threads here did not overlap, so the change *should* be synced all right, and the user *should* get 1061698 in step 3?
>
> What do I overlook?
>
> Thanks and all the best,
> OC

Hi,

How many instances of the application are typically running?

Regards,

-- 
Matt
https://logicsquad.net
https://www.linkedin.com/company/logic-squad/
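P.S. The reason I ask: if more than one instance is running, each instance keeps its own snapshot cache at the EODatabaseContext level, and an update committed through instance B is never pushed into instance A's snapshots, so A keeps handing back the old FK until something forces a refetch. Purely as a diagnostic -- an untested sketch; the entity and attribute names (DBAuction, uid) come from your log, the class and method names are made up -- you could do a refreshing fetch against that row and check whether lastValidPriceOffer() then returns 1061698:

    import com.webobjects.eocontrol.EOEditingContext;
    import com.webobjects.eocontrol.EOFetchSpecification;
    import com.webobjects.eocontrol.EOQualifier;
    import com.webobjects.foundation.NSArray;

    public class SnapshotCheck {
        /** Refetches the DBAuction row with the given uid, overwriting the
         *  in-memory snapshot with whatever is currently in the database. */
        public static NSArray refetchAuction(EOEditingContext ec, int uid) {
            EOQualifier q = EOQualifier.qualifierWithQualifierFormat(
                    "uid = %@", new NSArray(Integer.valueOf(uid)));
            EOFetchSpecification fs = new EOFetchSpecification("DBAuction", q, null);
            fs.setRefreshesRefetchedObjects(true);  // replace stale snapshots with the current row
            return ec.objectsWithFetchSpecification(fs);
        }
    }

If the relationship comes back correct after that, the stale snapshot (rather than EC locking) is the thing to chase.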