Running the code below will deadlock. As explained, the get() method takes a lock and then calls the count() method, which will also try to take a lock before the set() method unlocks.
It’s actually not too clear why, if one method takes a lock on an object and then calls another method which also takes a lock on the same object, this leads to a deadlock. What is the mechanism? And what is wrong with the count() method taking a lock before the set() method unlocks?
Even if the count() method also takes a lock, it will release that lock before set() finishes. So why can’t those two methods deal with it?
Hm, that’s true, but the idea of my question is slightly different.
As I understand it, the second method, count(), cannot get access to the ds datastore because it’s already locked by the mutex used in the get() method. That’s why it’s unclear what the author meant when he said it would take another lock.
Is it possible for count() to take one more lock if ds has already been locked?
In your particular case, having one mutex, I think you must unlock the resource (the map) before accessing it from the other method. I guess it is not a good idea to have nested locks on the same mutex.
RWMutex is not a solution here. From the documentation:
It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type.
Edit:
Oh, no. It was my misreading. Yeah, you’re right about this case, sorry.
I agree that recursive RWMutex use can cause issues in general. Now retired, I am implementing a clean-room version of UniVerse, a Pick-model database. In a form of join, the Pick TFile and Prime Information TRANS() constructs can read a table row (and column) from a primary key. With rows stored in hash tables, an RWMutex is held on the hash bucket. So you need to nest the RWMutex: when the secondary read terminates, the lock must be maintained. My solution was to track the requests (by table and hash bucket) and release the RWMutex only when the count dropped to zero.
My current problem is that each user is a separate goroutine. If some user code creates an infinite loop, how do I terminate that goroutine, or have it break out of the loop? If it is looping, it never runs through a select, so it never sees the context cancellation.
I agree with @j-forster for your particular case, but if you want to extend your code with other methods which could also create recursive locks, perhaps you should use other techniques, like wrapping the standard lock/unlock and adding a semaphore to avoid locking the map if you have already done so… or whatever.