Mutex - why the code deadlocks

Running the code below will deadlock. As it is explained:

“because the get() method will take a lock and will call the count() method which will also take a lock before the set() method unlocks()”

It’s actually not clear why, if one method takes a lock on an object and that method calls another method which also takes a lock on the same object, this leads to a deadlock. What is the mechanism? And what is wrong with the count() method “which will also take a lock before the set() method unlocks()”?

Even if the count() method also takes a lock, it will release that lock before set() finishes. So why can’t those two methods deal with it?

package main

import (
	"fmt"
	"sync"
)

type DataStore struct {
	sync.Mutex
	cache map[string]string
}

func New() *DataStore {
	return &DataStore{
		cache: make(map[string]string),
	}
}

func (ds *DataStore) set(key string, value string) {
	ds.Lock()
	defer ds.Unlock()
	ds.cache[key] = value
}

func (ds *DataStore) get(key string) string {
	ds.Lock()
	defer ds.Unlock()
	if ds.count() > 0 {
		item := ds.cache[key]
		return item
	}
	return ""
}

func (ds *DataStore) count() int {
	ds.Lock()
	defer ds.Unlock()
	return len(ds.cache)
}

func main() {
	store := New()
	store.set("Go", "Lang")
	result := store.get("Go")
	fmt.Println(result)
}
2 Likes

Concurrent access to maps can lead to panics, so you must protect them with locking (especially writes).

3 Likes

Hm, that’s true, but the idea of my question is slightly different.

As I understand it, the second method, count(), cannot get access to the ds data store because it is already locked by the mutex used in the get() method. That’s why it’s unclear what the author meant when he said:

Is it possible for count() to take one more lock if ds has already been locked?

1 Like

In your particular case, having one mutex, I think you must unlock the resource (the map) before accessing it from another method. I guess it is not a good idea to have nested locks on the same mutex.
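
For example, one way to avoid the nested lock (a sketch of the general idea, not code from the thread; countLocked is a hypothetical name) is to keep the locking in the outer methods and move the shared work into a helper that assumes the lock is already held:

// countLocked assumes the caller already holds ds's mutex,
// so it must not lock again.
func (ds *DataStore) countLocked() int {
	return len(ds.cache)
}

func (ds *DataStore) get(key string) string {
	ds.Lock()
	defer ds.Unlock()
	if ds.countLocked() > 0 { // no second Lock(), so no deadlock
		return ds.cache[key]
	}
	return ""
}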

2 Likes

Go doesn’t support recursive mutexes. And there are arguments for not using them at all:

But if you really need them, you can use a special library, for example this one -

5 Likes

So it is the recursive locking that causes the deadlock there, because recursive mutexes aren’t supported.
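
A minimal sketch of the mechanism (my own illustration, not code from the thread): sync.Mutex is not reentrant, so a second Lock() from the same goroutine simply blocks, waiting for an Unlock() that can never come.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex

	mu.Lock()
	fmt.Println("first Lock acquired")

	// The second Lock() waits for an Unlock(), but the only goroutine
	// that could call Unlock() is the one blocked right here, so the
	// runtime aborts with
	// "fatal error: all goroutines are asleep - deadlock!"
	mu.Lock()
	fmt.Println("never reached")
}

That is exactly what happens when get() holds the lock and then calls count(), which tries to Lock() the same mutex again.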

1 Like

You don’t need to use a recursive mutex in your case; sync.RWMutex will solve the issue here. Use RLock for reading and Lock for writing.
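
For this to compile, the DataStore struct has to embed sync.RWMutex instead of sync.Mutex:

type DataStore struct {
	sync.RWMutex
	cache map[string]string
}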

func (ds *DataStore) set(key string, value string) {
	ds.Lock()
	defer ds.Unlock()
	ds.cache[key] = value
}

func (ds *DataStore) get(key string) string {
	ds.RLock()
	defer ds.RUnlock()
	if ds.count() > 0 {
		item := ds.cache[key]
		return item
	}
	return ""
}

func (ds *DataStore) count() int {
	ds.RLock()
	defer ds.RUnlock()
	return len(ds.cache)
}
5 Likes

RWMutex is not a solution here. From the documentation:

It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type.
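
A minimal sketch of the hazard that quote describes (my own illustration, assuming a writer queues up between the outer and the nested read lock):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.RWMutex

	mu.RLock() // outer read lock, like get() taking RLock

	go func() {
		// A writer (like set()) arrives while the read lock is held.
		// It blocks, and from now on new readers are blocked as well.
		mu.Lock()
		defer mu.Unlock()
	}()

	time.Sleep(100 * time.Millisecond) // let the writer queue up

	// Nested read lock, like count() called from inside get().
	// It waits behind the blocked writer, the writer waits for the
	// outer RUnlock, and that RUnlock never happens:
	// "fatal error: all goroutines are asleep - deadlock!"
	mu.RLock()
	fmt.Println("never reached")
}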

Edit:
Oh, no. It was my misreading. Yeah, you’re right about this case, sorry.

I agree that a recursive RWMutex can cause issues in general. Now retired, I am implementing a clean-room version of UniVerse, a Pick Model database. In a form of join, the Pick TFile and Prime Information TRANS() constructs can read a table row (and column) from a primary key. With rows stored in hash tables, an RWMutex is held on the hash bucket. So the RWMutex needs to be nested: when the secondary read terminates, the lock must be maintained. My solution was to track the requests (by table and hash bucket) and release the RWMutex when the count dropped to zero.
My current problem is that each user is a separate goroutine. If some user code creates an infinite loop, how do I terminate that goroutine, or have it break out of “some loop”? If it is “looping”, it would not pass through a select to see the context cancel.
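
One common pattern (a sketch, not something from this thread; worker is a hypothetical name): since a goroutine cannot be killed from the outside, the loop itself has to poll for cancellation, for example at the top of each iteration instead of blocking in a select.

import "context"

// worker is the body of one user's goroutine.
func worker(ctx context.Context) {
	for {
		// ctx.Err() is non-nil once the context has been cancelled,
		// so this check lets the loop exit without ever blocking.
		if ctx.Err() != nil {
			return
		}
		// ... one iteration of the user's work ...
	}
}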

1 Like

Don’t overengineer things:
Replace the call to ds.count() inside get() with len(ds.cache) to avoid the nested lock.

Or drop the whole if ds.count() > 0 block, because you don’t need to check the length first.
Just return ds.cache[key].
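
A sketch of the simplified get() along those lines:

// Reading a missing key from a map returns the zero value ("" for a
// string), so the length check is not needed. If it were, len(ds.cache)
// could be used directly here, because the lock is already held.
func (ds *DataStore) get(key string) string {
	ds.Lock()
	defer ds.Unlock()
	return ds.cache[key]
}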

3 Likes

I agree with @j-forster for your particular case, but if you want to extend your code with other methods which could also create recursive locks, perhaps you should use other techniques, like wrapping the standard lock/unlock and using a semaphore to avoid locking the map if you have already done so… or whatever.

1 Like

Maybe this helps 🙂

To me, it looks like database transactions. Maybe I am wrong.

1 Like
