Welcome to OGeek Q&A Community for programmers and developers - Open, Learning and Sharing
Welcome To Ask or Share your Answers For Others


0 votes
457 views
in Technique[技术] by (71.8m points)

Azure Functions - Parallel/Concurrent function execution and scale out

I've recently started working with Azure Functions and (after reading SO and Microsoft docs) have been having trouble understanding scale out and parallel execution.

My situation is a function app with CRUD Azure Functions - they need to react quickly and concurrently, like a REST API. However, when testing in my own browser with 10 different tabs, the requests seem to finish sequentially (one after the other, the last tab waiting a LONG time).

I was wondering if I am missing something, or if there is a way to allow for parallel execution using some other Azure product?

(I've read about a few application settings, and about possibly using APIM or hosting the functions differently, but these didn't seem to be the answer.)

Thanks!

Question from: https://stackoverflow.com/questions/65874016/azure-functions-parallel-concurrent-function-execution-and-scale-out


1 Reply

0 votes
by (71.8m points)

I think the cold start problem has already been mentioned.

Other than paying more (App Service plan or Premium plan), another option is to write a little more code and save a bunch of money:

  • Add a new query param ?keepWarm=1 to the REST API endpoint you want to keep warm. When a request is a keepWarm call, the function should just return 200 immediately, skipping any real work.
  • Add a scheduled function (timer trigger) that wakes up every X seconds and makes a call to /endpoint?keepWarm=1.

Host this whole thing on the Consumption plan.
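The two steps above can be sketched roughly like this. This is plain Python with hypothetical names, not real Azure Functions binding code (a real function would receive an azure.functions.HttpRequest, and the warmer would be a timer trigger on an NCRONTAB schedule such as "*/30 * * * * *" for every 30 seconds):

```python
# Hypothetical sketch of the keep-warm pattern described above.
# In a real Azure Function this logic would live at the top of the
# HTTP-triggered handler, before any expensive setup.

def handle_request(query_params: dict) -> tuple:
    # Short-circuit keep-warm pings: return 200 before doing any real
    # work (no DB connections, no auth) so each ping stays cheap and fast.
    if query_params.get("keepWarm") == "1":
        return (200, "warm")
    # ...normal CRUD handling would go here...
    return (200, "crud result")
```

The point of returning before any real work is that the ping only exists to keep the host loaded; the cheaper each ping is, the less it adds to your GB-seconds bill.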

Even with X = 1 second, you'll probably end up paying a LOT less ($5-20) than with the more expensive plans ($100+, I think).
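As a rough sanity check on that claim, here is a back-of-envelope estimate. The rates and free grants below are assumptions based on Azure's published Consumption-plan pricing ($0.20 per million executions after the first 1M free; $0.000016 per GB-second after 400,000 GB-s free) and may have changed:

```python
# Back-of-envelope cost of pinging one endpoint every second for a month.
# All rates below are assumed Consumption-plan prices; verify current pricing.
SECONDS_PER_MONTH = 60 * 60 * 24 * 30            # ~2.59M seconds
pings = SECONDS_PER_MONTH                        # X = 1 second

# Execution-count charge: $0.20 per million after the first 1M free
exec_cost = max(0, pings - 1_000_000) / 1e6 * 0.20

# Resource charge, billed in GB-seconds: assume the minimum 128 MB
# memory bucket and ~0.1 s per ping, minus the 400,000 GB-s free grant
gb_seconds = pings * (128 / 1024) * 0.1
resource_cost = max(0, gb_seconds - 400_000) * 0.000016

total = exec_cost + resource_cost
print(f"~${total:.2f}/month")
```

Under these assumptions the pings stay inside the free GB-seconds grant entirely, and the execution-count charge comes to well under a dollar a month, so the real cost driver would be whatever the warmed endpoint itself does.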


IMHO the Premium and Dedicated plans are for when you need more fire-power, not for when you just want to keep things warm. In fact, the Consumption plan lets you scale out to 200 instances, whereas the limit for the pricier plans is 10 to 100.

With the pricier plans you do get more fire-power, so you can run bigger tasks and take as long as you like:

  • 210-840 ACU
  • 1.75-14 GB RAM
  • unbounded execution time limit (*)

(*) unbounded execution time limit: if your trigger is an HTTP trigger (REST API), then this is useless due to a load balancer limitation:

If a function that uses the HTTP trigger doesn't complete within 230 seconds, the Azure Load Balancer will time out and return an HTTP 502 error. The function will continue running but will be unable to return an HTTP response.


On unit of scaling:

The documentation is very unclear here.

Scaling Function App instances

As described in the Azure Functions scale documentation:

In the Consumption and Premium plans, Azure Functions scales CPU and memory resources by adding additional instances of the Functions host. The number of instances is determined by the number of events that trigger a function.

Each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU. An instance of the host is the entire function app, meaning all functions within a function app share resources within an instance and scale at the same time.

  • What's clear is
    • that when the AFR (Azure Functions Runtime) decides it needs to scale (based on traffic), it spawns new Function Hosts; each Host contains the whole Function App along with all its Functions.
    • that you as a developer must write your functions so that they limit their resource usage to what one Function Host offers.
  • What's NOT clear is
    • whether each Host runs only one instance of each Function, or multiple.
    • whether multiple different Functions within the same App can execute in parallel. If yes, then each Function implementation has to share Host resources with the other Functions in the App.
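One thing worth noting on that second open question: how much overlaps within a single host also depends on how the function code itself is written. A blocking handler occupies its worker for the whole invocation, while a handler that awaits its I/O lets the worker interleave invocations. A minimal sketch of that difference (plain asyncio, nothing Azure-specific, names are hypothetical):

```python
import asyncio
import time

# Plain-asyncio illustration: when handlers await their I/O instead of
# blocking, a single worker process can interleave many invocations.

async def fake_invocation(i: int) -> int:
    await asyncio.sleep(0.1)  # stands in for awaited I/O (DB query, HTTP call)
    return i

async def run_ten() -> float:
    start = time.monotonic()
    results = await asyncio.gather(*(fake_invocation(i) for i in range(10)))
    assert results == list(range(10))
    return time.monotonic() - start

elapsed = asyncio.run(run_ten())
# The ten 0.1s invocations overlap instead of queuing one after the other.
print(f"10 concurrent invocations took {elapsed:.2f}s")
```

If the handlers blocked instead of awaiting, the same ten calls would take roughly 1 second end to end, which looks a lot like the "tabs finishing one after the other" behavior in the question.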

Scaling worker processes

Also, it is possible to set FUNCTIONS_WORKER_PROCESS_COUNT via Application Settings to control the number of language worker processes.

I guess in this case each language worker process runs within the same Host and shares its resources.
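For local development that setting goes in the standard local.settings.json file (in Azure it is set as an application setting on the Function App; valid values go up to 10). A sketch, with the worker runtime assumed to be Python for illustration:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "FUNCTIONS_WORKER_PROCESS_COUNT": "4"
  }
}
```

With a value of 4, the host starts up to four worker processes, so up to four otherwise-blocking invocations can run at once within a single host instance.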



...