BeautifulSoup can only parse the HTML you give it, and the content you want isn't in that HTML yet. The browser is running LinkedIn's javascript, which notices that you're scrolling, fetches more content, and injects more HTML into the page — you need to replicate that fetch somehow.
Bad news: BeautifulSoup doesn't execute javascript or make network requests on its own. You'll need another tool.
Good news: there are tools for this! You could certainly use Selenium; that's probably the simplest way to solve this, since it replicates the browser environment well enough for these purposes.
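A minimal sketch of the Selenium route: drive a real browser, scroll until the page stops growing, then hand the rendered HTML to BeautifulSoup. The scroll timing and round limit here are illustrative guesses, not LinkedIn-specific values — tune them for your connection.

```python
import time

def scroll_to_bottom(driver, pause=1.5, max_rounds=20):
    """Scroll like a user would, until the page height stops growing."""
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # give the page's javascript time to inject content
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break  # nothing new was loaded; we're done
        last_height = new_height

# Typical use (requires `pip install selenium` plus a browser driver):
#     from selenium import webdriver
#     from bs4 import BeautifulSoup
#     driver = webdriver.Chrome()
#     driver.get(page_url)              # the page you are scraping
#     scroll_to_bottom(driver)
#     soup = BeautifulSoup(driver.page_source, "html.parser")
#     driver.quit()
```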
If you are absolutely committed to not using Selenium, I recommend you deep-dive into the LinkedIn site, figure out which bits of javascript are responsible for fetching more data, replicate the network requests they make, and parse the responses yourself.
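That route looks roughly like this sketch. The endpoint and paging parameters below are hypothetical placeholders — you find the real ones by watching your browser's devtools Network tab while scrolling, and you'll also need to carry your logged-in session cookies.

```python
from urllib.parse import urlencode

def paged_url(base, start, count=25, **params):
    """Build one page of the paginated request the page's javascript makes.

    `base`, `start`, and `count` mirror a common offset/limit paging scheme;
    the real parameter names come from what you observe in devtools.
    """
    query = dict(params, start=start, count=count)
    return f"{base}?{urlencode(query)}"

# You would then loop over pages, fetching each one with your session
# cookies and parsing the JSON yourself, e.g. with requests:
#     session = requests.Session()      # carries the logged-in cookies
#     data = session.get(paged_url(api_base, start)).json()
```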
For most people, though, Selenium will be the right answer.