
memo-async-fn

public repository: 0 stars, 0 forks, 3 issues

Commits

List of commits on branch master:

  • 28835f6328b22f28b191707e7f875b77cb343ad5 - docs: update example code (llyonbot, 5 years ago, unverified)
  • 3e85d10d909761f83aeae080cdeba442a44a744c - feat: rename (llyonbot, 5 years ago, unverified)
  • 7b7885f62b7795e38e2f59a6b2a3530cce9aafab - docs: readme (llyonbot, 5 years ago, unverified)
  • 3284b73f2a25f5a2fa4c19ef4081a9dcc5d31916 - chore: remove big umd stuff (llyonbot, 5 years ago, unverified)
  • 3a8814498db3374297086e44e2106bbc9680644c - feat: README (llyonbot, 5 years ago, unverified)
  • db0d2822168b6536b80e6d9bc2b74276634266c7 - Create nodejs.yml (llyonbot, 5 years ago, verified)

README

memo-async

combine concurrent async / Promise calls and cache results in memory with an LRU cache


import memoAsync from 'memo-async'

const getUserInfo = memoAsync(   // <- magic
  async (userId) => {
    const { data } = await fetcher('http://xxx/', { userId })
    return data
  }
)

// parallel requests

const infos = await Promise.all([
  getUserInfo(12),   // send request
  getUserInfo(12),   // (cached) re-use 1st request
  getUserInfo(9),    // send request
  getUserInfo(12),   // (cached) re-use 1st request
])

// then request again

const user5 = await getUserInfo(12)   // get cached result,
                                      // or send request if last request failed

// after a few seconds...

const user6 = await getUserInfo(12)   // send request (cache expired)

API

This package provides memoAsync, which can be used as a utility function or as a decorator:

  • memoAsync(fn, opts) returns a wrapped async function, which may combine concurrent calls and cache results in memory.

    • fn : Function - your async function
    • opts : MemoOptions - optional, see below
  • memoAsync(opts) returns a class method decorator

    • opts : MemoOptions - optional, see below

    Note: by default, each instance has its own LRU cache in memory.

    If you have many instances, consider sharing exactly one LRUCache by setting opts.cache. In that case, do not forget to write an opts.genKey that tells the instances apart (see the sketch after the decorator example below).

    decorator example

    class Reporter {
      @memoAsync()
      async readData(filename) {
        // some expensive requests
      }
    }
    
    const joe = new Reporter()
    // now joe.readData() may merge and cache requests
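
    shared cache example

    a minimal sketch of the shared-cache setup mentioned above. it assumes the lru-cache package (whose import style and option names vary between versions); the Reporter class and the this.id field are made up for illustration:

    import { LRUCache } from 'lru-cache'   // v10+ named export; older versions use a default export
    import memoAsync from 'memo-async'

    const sharedCache = new LRUCache({ max: 1000 })   // one cache shared by all instances

    class Reporter {
      constructor(id) {
        this.id = id   // illustrative: used to keep cache keys distinct per instance
      }

      @memoAsync({
        cache: sharedCache,
        // without a genKey, two instances calling readData('a.csv')
        // would collide on the same cache key
        genKey(filename) { return `${this.id}:${filename}` },
      })
      async readData(filename) {
        // some expensive requests
      }
    }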

MemoOptions

  • genKey : (...args) => string

    compute the cache key from arguments.

    default: treat all arguments as strings and concatenate them

    if you are using memoAsync within a class, you may use this while computing the key

  • duration : number

    duration of one batch, i.e. how long a result stays cached.

    default: 3000 (ms)

  • batchSize : number

    how many calls can be merged into one batch.

    default: 500 (calls)

  • cache : LRUCache

    use an existing lru-cache instance.

    if not set, memoAsync will create one.

  • cacheSize : number

    set the cache capacity. only takes effect when cache is not given.

    default: 1000

  • onHit : (key, result, args) => void

    callback invoked when the cache is hit.

    • key : string - the cache key

    • result : Promise - the cached Promise. you cannot change it

    • args : any[] - array of arguments

    Note: if you are using memoAsync within a class, this will be set to the instance inside onHit.
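
options example

putting the options together, a hypothetical configuration. the fetchPrice function and the endpoint URL are made up for illustration; only the option names come from the list above:

import memoAsync from 'memo-async'

const fetchPrice = memoAsync(
  async (symbol) => {
    const res = await fetch(`https://api.example.com/price?symbol=${symbol}`)
    return res.json()
  },
  {
    genKey: (symbol) => symbol.toUpperCase(),   // 'btc' and 'BTC' share one cache entry
    duration: 10000,    // keep each result for 10 seconds
    batchSize: 100,     // merge at most 100 concurrent calls per key
    cacheSize: 200,     // capacity of the internally created LRU cache
    onHit: (key, result, args) => {
      console.log(`cache hit for ${key}`, args)   // result is the cached Promise
    },
  }
)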