
Hacker News
Ask HN: Is it ok to use traditional server-side rendering these days?
29 points by jamesmp98 on Dec 19, 2016 | 48 comments
So I was going to start on a small ASP.NET Core project today. It wasn't anything complicated, and I was just going to use server-side rendering with Razor. But then it hit me: it's almost 2017, and I'll probably get kicked out of the web developer community for not using Angular, or React, or something else. Jokes aside, is it discouraged to use server-side rendering?



Is it OK? Who cares? As long as you solve the problem for your customer, it doesn't matter that much.

That said, I find server-side rendering to be more difficult when faced with the problem of creating the type of UI that people want these days. Generally I like the server to just be an API that a Javascript client talks to, because unless you want no dynamic/asynchronous features you're going to have to write that anyway. Now you're maintaining two apps, which is twice as much work. Sometimes it's necessary to do that, sometimes it's not... I always aim for only writing one UI, because it's like twice as fast as writing two. But no doubt people will reply to me saying they browse with Javascript disabled to get an extra 15 minutes of battery life out of their laptop, so there's that.


I browse with JS disabled by default because of shoddy ad networks and incredibly annoying dynamic effects on websites. (I go to like 5 websites on my phone because of how unusable the internet is without blocking that nonsense.)

Now, I may not be the target audience for your app, and that's fine, but it's unfair to dismiss people browsing without JS for strawman reasons.


I use a crowd-sourced hosts file on my system to block most of the ad-networks and metrics tracking sites.

Check out https://github.com/StevenBlack/hosts
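For context, those host lists work by resolving ad/tracker domains to a non-routable address, so requests to them fail immediately. A couple of example entries (hypothetical domains, not from the linked list):

```
0.0.0.0 ads.example.com
0.0.0.0 telemetry.example.net
```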


Why not use an ad blocker?


I can use several things.

Adblockers generally do two things: block specific domains and change the rendering of certain elements.

Browsing with JS disabled handles things like stopping certain kinds of code -- even loaded from the primary domain -- from running.

Layered defense usually works better than a single method, and advertisers, privacy violators, etc. have shown a willingness to enter a full-on arms race. So I responded in kind.

The honest truth is I just don't want to do business with most people trying to run code on my computer, so if running their code is a precondition for using their product, I just don't use it.


Good point


Why not both?


Well, for what it's worth, I get a lot more than 15 minutes and my browsing experience in general is just incredibly fast and smooth. :) It also affords me the ability to not be paranoid about whether there's anything malicious on the page.

If I'm expecting some kind of webapp then I'm fine with whitelisting your domain, though if you're basically just a content site, I usually just browse away.

I confess I don't really understand your point though: In what sense would you be maintaining two apps? Aren't you ultimately maintaining exactly the same amount of code, if not more, the case just being that yours isn't as tightly coupled across a network?


You're maintaining two apps anyway, because there is the client and the server. The OP didn't say it explicitly, but I'm assuming they are already building the server side no matter what and need to add a frontend. The backend already builds a bunch of objects that will need to be partially reimplemented in the frontend. It's just a matter of how many of them.

My two cents (to the OP) is that you should consider the people or teams building the thing. If you have one team doing all frontend and backend, doing it (almost) all server side makes sense. If you have a dedicated person doing the frontend, commit to a JS framework and make a good API to the backend. Certainly don't make a frontend JS guru render HTML in Java or .NET.


:)

I browse with Javascript turned off because it cripples ads and reduces memory usage of my browser significantly.

I don't care what you think is necessary to serve a page of information. It almost certainly isn't. You have to earn the right to have javascript enabled or I'll just click away.


I browse with JavaScript disabled to avoid advertising.

It has the added side benefit of being a quality filter: if the site won't load properly or fall back gracefully without JavaScript enabled then it probably isn't worth my time.


Of course it's okay. Server-side rendering is still the most robust, accessible, SEO-friendly way of building websites and web applications and that's unlikely to change.


There are still many use cases where server side rendering makes sense.

For example, I work on a large enterprise webapp. It has a frontend UI that users interact with heavily; this we have built using client-side JavaScript that talks to the Rails backend via an API.

However, we also have an admin area -- many of those pages are used very rarely and are quite complicated. I initiated a change to build these panels using traditional server-side rendered ERB views. In the context of our codebase, it takes less code and it's easier to iterate on these without bothering with the Javascript framework.

In other cases, like my personal blog, I use Jekyll to generate a bunch of static HTML and essentially cache it all on GitHub Pages.

Javascript frameworks are amazing and can provide a much richer, much better experience. But I don't necessarily think they're appropriate for all problems. One other big reason is just the inherent "experimental-ness" and the churn / quick obsolescence you may find in this area. Some problems don't need to be re-solved every year or two. :)


> Javascript frameworks are amazing and can provide a much richer, much better experience.

I think this is the point worth inspecting if you're considering server side rendering. If you're making a page that just displays content, I don't see any reason to throw Javascript in front of it. Just render your HTML and send it. If you want to make changes to the content dynamically, if it's going to take super long to load and you want to show users a loading screen, if they can interact with the content, etc, that may be a case to throw Javascript into the equation.


This is incredibly true and something you don't realize until you're browsing the web on a slow connection.

I was travelling with 2G and using most client side apps was near impossible. One app sent down a 1.9MB `components.js` file, and because it was a JS app, all scripts had to be loaded until anything rendered.

Obviously server-side apps are in no way immune from this, but by default, unless you really do need a rich client experience, it's often not worth the bundle size and CPU load. Just send down HTML.

You know which website works fabulously over 2G? Hacker News.


I have a strong preference for a fast and low latency server-focused application compared to pretty much anything. It just feels more solid, and I am confident that I'm less likely to run into bugs than I would on a thick client. Let browsers focus on what they excel at, parsing and rendering documents, and let the server handle business logic. Which it has to anyway: the question is whether the client also has to.

That said, this is not the direction the industry is moving in. Sadly.


Server-side rendering is absolutely fine. The only caveat I would add is that if you want to change some functionality to a client-side JavaScript app later then having dynamically generated HTML will make this harder. For example, if you want to make it work offline in the future then you will probably have to rewrite the presentation layer.

Razor views in ASP.NET Core are really nice (disclaimer, I wrote a book on this so I'm obviously biased). However, if you are making a Single Page App then you will need to stick to HTTP APIs (returning JSON, not HTML). The controllers in Core can easily do both now that MVC 6 and Web API 2 are merged into the same thing.

The only exception to this is for an initial rendering of your app server-side, so that the browser gets populated HTML on the first load and there is something to see immediately. For example, using ReactJS.NET you can load the same model into your Razor view that you return from your API and pre-render it using view helpers.

If you're interested then I wrote more about this today: https://unop.uk/react-and-asp-net-core/

P.S. Has this whole thread been hidden or flagged?


Client-side rendering and the usage of JS MV* frameworks has made great inroads and is replacing the combination of 'server-side MVC framework plus rich AJAX on the client', because in the latter scheme you have two separate parts in foreign languages held together by thin glue, and can only do limited code reuse and transfer of expertise.

That being said, server-side rendering will continue to be a great fit for cases where you ingest data and produce a deterministic, non-interactive (or not-too-interactive) result.

Remember, the issue is that in AJAX you'd receive some async result from the server and would have to splice it into the DOM, so you're throwing these contextless snippets of data on the wire and relying on your client-side JS to do the right thing; all the while you have an entire different server-side application spitting out complete well-formed HTML documents.

In a modern, virtual-domming JS MV* framework, these two functions become the same, and you don't have to architect two different solutions.


I am giving a talk on this in January internally:

Make SSR Great Again - "Swinging the pendulum back to server-side rendering"

https://www.evernote.com/shard/s17/sh/08a3fdd5-7ff7-40ea-b2d...


Interesting. You have this as one of your points:

"- https2 server push is anti-pattern. Doesn’t consult browser cache."

This is an odd claim, as server push is built to respect the browser cache and can even be used to populate it. In usual cases the server can know whether to push the dependencies of a requested resource based on the ETag header, working under the true-enough assumption that if a resource is in cache, its dependencies are also likely to be.

I can think of a thornier case where two resources have a shared dependency, and the server pro-actively pushes the shared dependency in response to a request for the second resource, oblivious to the fact that it's already in cache from a request for the first resource. I don't believe this case is enough to call server push an anti-pattern, though: the browser can cancel the push midstream if it already has it in cache. Although the server probably has already sent all of the dependency at that point.

Used appropriately, server push interacts nicely with caching and can even improve cacheability: I think servers can push 304 responses, for instance.


Yes, it's OK technically, morally, and legally. The most important thing is that your project does what it sets out to do.


And ethically, don't forget about ethically.


I strongly advocate actually using SSR. Way quicker to build.


I use server side rendering sprinkled with some JS. Works great and my users are thanking me for the simple UI.


If your project is going to face the general public and needs the best chances of being indexed correctly by Google, you should start with some type of server side rendering.

Currently, I'd opt for using either an existing CMS like Wordpress or Shopify, depending on the needs, with a layer of React.js on top of it. Or a full stack React.js + Node.js custom build.

You'll find plenty of resources saying Google can index JS SPAs but regardless of what you think about that, the no JS user should not be left out in the cold.

If you're building an internal application where you can dictate the browser support I've actually found that I'm fastest completing projects building a JSON API with an Angular.js front end. React.js would work just as well of course, just depends on what you are more comfortable with.

Lastly, PageSpeed. Some clients don't care about this, but some do. If you're deciding on a stack, try running your build through Google PageSpeed Insight once in a while during the early stages to make sure you're not getting any road blocking red flags.

I personally don't care much for PageSpeed's reports, I have found them to be buggy or bizarre at times. But having your client rip your masterpiece apart just because they hired a marketer that's blaming all their shortcomings on a less than perfect PageSpeed score isn't fun.

I hope that helps! Let me know if you have any questions. I would be happy to clarify or elaborate.


Amazon still does. Every freaking filter checkbox you click reloads the whole page (although not sure about the header) and then loads stuff with Javascript afterwards as well.


One thing I haven't understood about JavaScript frameworks is how people go about setting meta tags for Facebook and Twitter sharing. These meta tags aren't readable by the Facebook scraper when set in JavaScript, so at least for my work, even if I do use some JavaScript framework, I also have to do some server side rendering so brands can share what they pay to share.

Maybe some people just don't care about this, and pure JavaScript works for them.
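The usual fix is exactly that small slice of server-side rendering: emit the share tags in the initial HTML even if everything else is a JS app, since the scrapers read the raw response and (mostly) don't run JavaScript. A sketch (the property names are the standard Open Graph/Twitter ones; the helper itself is made up):

```javascript
// These tags must be in the server's HTML response, not set via JavaScript,
// or the Facebook/Twitter scrapers will never see them.
function renderShareTags({ title, description, imageUrl }) {
  const esc = s => String(s).replace(/"/g, '&quot;');
  return [
    `<meta property="og:title" content="${esc(title)}">`,
    `<meta property="og:description" content="${esc(description)}">`,
    `<meta property="og:image" content="${esc(imageUrl)}">`,
    `<meta name="twitter:card" content="summary_large_image">`,
  ].join('\n');
}
```

The rest of the page can still be a pure client-side app; only this fragment has to come from the server.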


You likely have a simple HTML page that bootstraps the SPA; just put your meta tags there! UPDATE: Ah, set in JavaScript. Yes, tricky. Reading is good.


There are strategies for this, which usually involve a headless browser rendering the page for the spider. It's as cumbersome as it sounds.


Depends on your requirements. If you are chasing fads or just looking to learn, then you pick the "hottest" of the JS frameworks and go with that with the understanding that a new itch will be upon you in about 9 months and you will likely feel the need to scratch it. If you want your product to last through the latest fad, then continue to do server side rendering and augment it with the JS/Client-side framework of your choice.


You don't need client-side stuff until you do. If all you're doing is simple html form submissions, it doesn't make sense to integrate something like React.

But then you need validations, and server-side checks for availability, then dynamic forms based on previous input choices, and data synching to preview the profile on the right, and then - might as well use React or similar instead of jQuery with DOM manipulation.
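That "sprinkle of JS" stage can be as small as a validation function layered over a server-rendered form that still works as a plain HTML POST (browser-side sketch; the field names and `showErrors` helper are hypothetical):

```javascript
// Progressive enhancement sketch: the form submits normally without JS,
// and this only adds client-side validation on top.
function validateSignup(fields) {
  const errors = {};
  if (!/^[^@\s]+@[^@\s]+$/.test(fields.email || '')) errors.email = 'Invalid email';
  if ((fields.username || '').length < 3) errors.username = 'Too short';
  return errors;
}

// In the browser, wire it up only if JS actually runs:
// document.querySelector('#signup').addEventListener('submit', (e) => {
//   const errors = validateSignup(Object.fromEntries(new FormData(e.target)));
//   if (Object.keys(errors).length) { e.preventDefault(); showErrors(errors); }
// });
```

Once you're past a couple of functions like this, that's the point where React or similar starts to earn its keep.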


Most companies develop enterprise software, so it's totally normal not to use the hippest architecture or JS framework. You need to solve a business problem.

I'm working at an EMS company. Most of our web tools are developed with ASP.NET WebForms. New projects are developed with MVC or Web API, but it is always more overhead than WebForms. You have to configure/develop more to get the same result in the end.


> Is it discouraged to use server-side rendering?

No, it's not! But it's discouraged to use it as the only rendering layer.

Server side rendering has one unbeatable advantage: speed. That's why you can view content on https://huu.la within a second; similar methods are used by Google, Twitter, etc. A lot of people might say we have faster networks now. Believe me, even in the Bay Area it takes time to, first, download your powerful client-side library, second, bootstrap your powerful client-side app, and third, call XHRs to fetch your data. Not to mention there are so many mobiles, with even less capable networks.

So from my experience developing Huula, I would recommend a combined approach. For places where speed is critical, for example your landing page or help page, use server-side rendering; for places with a lot of user interaction, use client-side rendering, since it gives you more support for componentization (if that's a word) and user interactions.


I know this is a joke post, but there's some happy middle ground that's possible too, without going full SPA.

If you render your pages and snippets on the server, a small amount of JS on the client can hot-swap content. At the extreme of this idea you get Turbolinks from Rails, where no real page loads happen but pages are still rendered on the server.

But in the small, it can let you do a sprinkling of async calls where they make the most sense, and still keep all rendering of responses on the server.

Thoughtbot wrote about it recently:

https://robots.thoughtbot.com/how-we-replaced-react-with-pho...


I use entirely server-side rendering, with a slight bit of jQuery sprinkled in for the very small amount of dynamic interaction that isn't easily done via the traditional HTTP request model, like menus or popups.


Why wouldn't it be? The only people that genuinely care what technology or programming language is used are other developers. And even then, only in certain situations.

So unless you're making a community/social network for developers and the picky type are somehow your primary audience, it doesn't matter what you pick to code your site in or how it's rendered.

The average Joe doesn't care whether it's something purely server side (like old school Perl) or React.


Of course the answer is "depends."

If you are building a web application that is behind a login, or you just don't care about SEO, then no, you don't need server-side rendering.

If you are building a website with JavaScript functionality for making it easier for the user, then you probably want a JS-free version, so yeah, better do some server-side rendering and use JS as a progressive enhancement.


Use "universal" React to only write the frontend code once. The server renders the React app to HTML, and the client attaches the click handlers and other behavior.

Here's a good example: https://github.com/erikras/react-redux-universal-hot-example
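The idea in miniature, framework-free for illustration (React's `renderToString` on the server and `hydrate` on the client play these roles; the function names below are made up):

```javascript
// One render function, shared by server and client ("universal" rendering).
function renderApp(state) {
  return `<div id="app"><p>Hello, ${state.name}</p></div>`;
}

// Server: send pre-rendered markup plus the state the client needs to
// take over, embedded as JSON.
function renderDocument(state) {
  return `<!DOCTYPE html><html><body>
${renderApp(state)}
<script>window.__STATE__ = ${JSON.stringify(state)}</script>
<script src="/client.js"></script>
</body></html>`;
}

// Client (client.js, in the browser): re-render from the embedded state and
// attach event handlers — the user saw content before any of this ran.
// const root = document.getElementById('app');
// root.outerHTML = renderApp(window.__STATE__);
```

The first paint comes from the server's HTML; the client-side code then takes over navigation and interactivity.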


A lot of websites work with very minimal JavaScript functionality. Yes, modern hype dictates things like SPA, but it isn't a standard, so unless you want to look like bleeding edge, server-side rendering is fine and might be preferable, especially if you are single developer and don't want to maintain two separate stacks.


What's your intended audience for the app?

Are you building an app for a doctor's office with low-tech ability users and low tech-level expectations, but extremely high reliability and speed expectations?

Or are you building a dating app for millennials who are going to expect the app to be shiny & cool?

As usual, the intended users' needs will inform your decision.


There is some movement to support universal JavaScript apps, i.e. render first hit with the server and serve the same JS code for the client to just fetch data for any page navigation after - both Angular and React support this.

That said, use whatever solves your needs/requirements/etc. in a timely fashion. Literally nothing else matters.


Twitter did full client-side rendering and moved back to partial server-side rendering to reduce perceived latency. Pushing business logic to the client has a cost attached.

Personally, I would default to server side rendering unless there's a very good reason to be rendering on the client instead (and there often is).


Server side rendering still beats js frameworks if your site depends on search engines for your traffic. I try to stick as much to server side rendering with only the required js files with minimal dependencies. It has worked wonders for my site.


More than OK, I'd say it's positively encouraged! I've seen more than my fair share of over-engineered web applications that would have been much simpler and no worse off as straightforward server-side rendered apps.


Ummm... The page you are looking at is server-side rendered.


Yes, it's still ok. SPA (single page apps) should only be edge cases where you really want your website to feel like an app (see gmail as a good example).


What is "traditional server-side rendering"?


If it works well for what you are building.



