Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 3 min | 576 words

Recently, we added a way to track alerts across all the sessions in a request. This alert detects when you are making too many database calls in the same request.

But wait, don’t we already have that?

Yes, we do, but that was limited to the scope of a single session. There is a very large set of codebases where the usage of OR/Ms is… suboptimal (in other words, they could take the most advantage of the profiler’s ability to detect issues and suggest solutions), but because of the way they are structured, those issues weren’t previously detected.

What is the difference between a session and a request?

Note: I am using NHibernate terms here, but naturally this feature is shared among all the profilers:

A session is the NHibernate session (or the data/object context in Linq to SQL / Entity Framework), and the request is the HTTP request or the WCF operation. Consider code such as the following:

public T GetEntity<T>(int id)
{
    using (var session = sessionFactory.OpenSession())
    {
         return session.Get<T>(id);
    }
}

This code is bad: it micro-manages the session, it uses too many connections to the database, it… well, you get the point. The real problem is code that calls it, such as:

public IEnumerable<Friend> GetFriends(int[] friends)
{
   var results = new List<Friend>();
   foreach(var id in friends)
       results.Add(GetEntity<Friend>(id));

   return results;
}
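For contrast, here is a sketch of what a corrected version might look like, assuming a single NHibernate session that is scoped to the whole request (how you manage that scope is up to the application); the `Friend` entity and its `Id` property mirror the example above:

```csharp
// Hypothetical fix: accept the request-scoped ISession and fetch all the
// friends in one round trip, instead of one session and one query per id.
public IEnumerable<Friend> GetFriends(ISession session, int[] friendIds)
{
    return session.CreateCriteria<Friend>()
        .Add(Restrictions.In("Id", friendIds))
        .List<Friend>();
}
```

This produces a single `WHERE Id IN (...)` query inside one session, which is exactly the shape of code that the per-session alert could already reason about.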

The code above would look like the following in the profiler:

Image1

As you can see, each call is in a separate session, and previously, we wouldn’t have been able to detect that you have too many calls (because each call is a separate session).

Now, however, we will alert the user with a “too many database calls in the same request” alert.

Image2

time to read 2 min | 213 words

We have recently been doing some work on Uber Prof, mostly in the sense of a code review, and I wanted to demonstrate how easy it was to add a new feature. The problem is that we couldn’t really think of a nice feature to add that we didn’t already have.

Then we started thinking about features that aren’t there and that there wasn’t anything in Uber Prof to enable, and we reached the conclusion that one limitation we have right now is the inability to analyze your application’s behavior beyond the session level. There is actually a whole set of bad practices that only show up when you are using multiple sessions.

That led to the creation of a new concept: the Cross Session Alert. Unlike the alerts we had so far, these alerts look at the data stream with a much broader scope, and they can analyze and detect issues that we previously couldn’t detect.

I am going to be posting extensively on some of the new features in just a bit, but in the meantime, why don’t you tell me what sort of features you think this new concept enables?

And just a reminder, my architecture is based around Concepts & Features.

time to read 1 min | 191 words

Originally posted at 1/7/2011

Because I keep getting asked, this feature is available for the following profilers:

This feature is actually two separate ones. The first is that the profiler detects the most expensive part of the query plan and makes it instantly visible. As you can see, in this fairly complex query, it is this select statement that is the hot spot.

image

Another interesting issue that only crops up when we are dealing with complex query plans is that the query plan can get big. And by that I mean really big. Too big for a single screen.

Therefore, we added zooming capabilities as well as the mini map that you see in the top right corner.

time to read 1 min | 144 words

Originally posted at 1/7/2011

This is another oft-requested feature that we just implemented. The new feature is available for the full suite of Uber Profilers:

You can see the new feature below:

image

I think it is cute, and was surprisingly easy to do.

Uber Prof has recently passed the stage where it is mostly implemented using itself, so I just had to wire a few things together, and then I spent most of the time making sure that things aligned correctly in the UI.

time to read 2 min | 251 words

Originally posted at 12/3/2010

One of the things that is coming to NH Prof is more smarts in the analysis part. We now intend to create a lot more alerts and guidance. One of the new features that is already there as part of this strategy is detecting bad ‘like’ queries.

For example, let us take a look at this:

image

This is generally not a good idea, because that sort of query cannot use an index, and requires the database to perform a table scan, which can be pretty slow.
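To make the pattern concrete, here is a hedged sketch in NHibernate criteria syntax (assuming an open `session`; the `Blog` entity and `Title` property are illustrative). `MatchMode` controls which side of the value the wildcard goes on, and that decides whether an index on `Title` can be used:

```csharp
// Generates: WHERE Title LIKE '%ayende'
// The leading wildcard defeats any index on Title and forces a table scan.
session.CreateCriteria<Blog>()
    .Add(Restrictions.Like("Title", "ayende", MatchMode.End))
    .List<Blog>();

// Generates: WHERE Title LIKE 'ayende%'
// A prefix match can seek into an index on Title instead of scanning.
session.CreateCriteria<Blog>()
    .Add(Restrictions.Like("Title", "ayende", MatchMode.Start))
    .List<Blog>();
```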

Here is how it looks from the query perspective:

image

And NH Prof (and all the other profilers) will now detect this and warn you about it.

image

In fact, it will even detect queries like this:

image

time to read 9 min | 1716 words

Originally posted at 12/3/2010

I recently had a discussion on how I select features for NH Prof.  The simple answer is that I started with features that would appeal to me.  My dirty little secret is that the only reason NH Prof even exists is that I wanted it so much and no one else did it already.

But while that lasted for a good while, I eventually got to the point where NH Prof does everything that I need it to do. So, what next… ?

Feature selection is a complex topic, and it is usually performed in the dark, because you have to guess at what people are using. A while ago I set up NH Prof so I could get usage reports (they are fully anonymous, and were covered on this blog previously). Those usage reports come in very handy when I need to understand how people are using NH Prof. Think of it like a user study, but without the cost, and without the artificial environment.

Here are the (real) numbers for NH Prof:

Action % What it means
Selection 62.76% Selecting a statement
Session-Statements 20.58% Looking at a particular session statements
Recent-Statements 8.67% The recent statements (default view)
Unique-Queries 2.73% The unique queries report
Listening-Toggle 1.10% Stop / Start listening to connections
Session-Usage 0.91% Showing the session usage tab for a session
Session-Entities 0.54% Looking at the loaded entities in a session
Query-Execute 0.50% Show the results of a query
Connections-Edit 0.38% Editing a connection string
Queries-By-Method 0.34% The queries by method report
Queries-By-Url 0.27% The queries by URL report
Overall-Usage 0.25% The overall usage report
Show-Settings 0.23% Show settings
Aggregate-Sessions 0.21% Selecting more than 1 session
Reports-Queries-Expensive 0.16% The expensive queries report
Session-Remove 0.13% Remove a session
Queries-By-Isolation-Level 0.08% The queries by isolation level report
File-Load 0.04% Load a saved session
File-Save 0.03% Save a session
Html-Export 0.02% Exporting to HTML
Sessions-Diff 0.01% Diffing two sessions
Sort-By-ShortSql 0.01% Sort by SQL
Session-Rename 0.01% Rename a session
Sort-By-Duration 0.01% Sort by duration
Sort-By-RowCount > 0.00% Sort by row count
GoToSession > 0.00% Go from report to statement’s session
Sort-By-AvgDuration > 0.00% Sort by duration (in reports)
Production-Connect > 0.00% (Not publicly available) Connect to production server
Sort-By-QueryCount > 0.00% Sort by query count (in reports)
Sort-By-Alerts > 0.00% Sort by alerts (for statements)
Sort-By-Count > 0.00% Sort by row count

There is nothing really earth shattering here; by far, people are using NH Prof as a tool to show them the SQL. Note how most of the other features are used much more rarely. This doesn’t mean that they are not valuable, but it does show that a feature that isn’t immediately available on the “show me the SQL” usage path is going to be used very rarely.

There is another aspect to feature selection: will this feature increase my software sales?

Some features are Must Have, your users won’t buy the product without them. Some features are Nice To Have, but have no impact on the sale/no sale. Some features are very effective in driving sales.

In general, there is a balancing act between how complex a feature is, how often people will use it and how useful would it be in marketing.

I learned quickly that having better analysis (alerts) is a good competitive advantage, which is why I optimized the hell out of that development process. In contrast, things like reports are much less interesting, because once you have the Must Have ones, adding more doesn’t seem to be an effective way of going about things.

And then, of course, there are the features whose absence annoys me…

time to read 4 min | 685 words

I was giving a talk yesterday at the Melbourne ALT.Net group, the topic of which was How Ayende Builds Software. This isn’t the first time that I have given this talk, and I thought that talking in the abstract is only so useful.

So I decided to demonstrate, live, how I get stuff done as quickly as I do. One of the most influential stories that I ever read was The Man Who Was Too Lazy to Fail by Robert A. Heinlein. He does a much better job of explaining the reasoning, but essentially, it comes down to finding anything that you do more than once, and removing all friction from it.

In the talk, I decided to demonstrate, live, how this is done. I asked people to give me an idea for a new feature for NH Prof. After some back and forth, we settled on warning when you are issuing a query using a like that will force the database to use a table scan. I then proceeded to implement the scenario showing what I wanted:

public class EndsWith : INHibernateScenario
{
    public void Execute(ISessionFactory factory)
    {
        using(var s = factory.OpenSession())
        {
            s.CreateCriteria<Blog>()
                .Add(Restrictions.Like("Title", "%ayende"))
                .List();
        }
    }
}

I then implemented the feature itself and tried it out live. This showed off some aspects of the actual development process, in particular the ability to execute just the right piece of code that I want by running individual scenarios easily.

We even did some debugging because it didn’t work the first time. Then I wrote the test for it:

public class WillDetectQueriesUsingEndsWith : IntegrationTestBase
{
    [Fact]
    public void WillIssueWarning()
    {
        ExecuteScenarioInDifferentAppDomain<EndsWith>();

        var statementModel = model.RecentStatements.Statements.First();

        Assert.True(
            statementModel.Alerts.Any(x=>x.HelpTopic == AlertInformation.EndsWithWillForceTableScan.HelpTopic)
            );
    }
    
}

So far, so good, I have been doing stuff like that for a while now, live.

But it is the next step that I think shocked most people, because I then committed the changes and let the CI process take care of things. By the time I showed the people in the room that the new build was publicly available, it had already been downloaded.

Now, just to give you some idea, that wasn’t the point of the talk. I did a whole talk on a different topic, and the whole process from “I need an idea” to “users are using the newly deployed feature” took something on the order of 15 minutes, and that includes the debugging needed to fix a problem.

time to read 2 min | 224 words

Originally posted at 11/25/2010

In a recent post, I discussed the notion of competitive advantage and how you should play around them. In this post, I am going to focus on Uber Prof. Just to clarify, when I am talking about Uber Prof, I am talking about NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler, Hibernate Profiler and LLBLGen Profiler. Uber Prof is just a handle for me to use to talk about each of those.

So, what is the major competitive advantage that I see in the Uber Prof line of products?

Put very simply, they focus very heavily on the developer’s point of view.

Other profilers will give you the SQL that is being executed, but Uber Prof will show you the SQL and:

  • Format the SQL in a way that makes it easy to read.
  • Group the SQL statements into sessions, which lets the developer look at what is going on within a natural boundary.
  • Associate each query with the exact line of code that executed it.
  • Provide the developer with guidance about improving their code.

There is other stuff, of course, but those are the core features that make Uber Prof what it is.

time to read 2 min | 291 words

The following features apply to NHProf, EFProf, L2SProf.

In general, it is strongly discouraged to data bind directly to an IQueryable. Mostly, that is because data binding may actually iterate over the IQueryable several times, resulting in multiple queries being generated from something that could be done purely in memory. Worse, it is actually pretty common for data binding to result in lazy loading, and lazy loading from data binding almost always results in SELECT N+1. The profiler can now detect and warn you about such mistakes preemptively. More than that, the profiler can also now detect queries that are generated from the views in an ASP.Net MVC application, another bad practice that I don’t like.
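The multiple-iteration problem is easy to demonstrate without any ORM at all. The following is a minimal, self-contained sketch using plain LINQ to Objects as a stand-in for an IQueryable: the deferred "query" body runs again on every enumeration, which is exactly what turns one logical query into several database round trips when a binding control iterates the source more than once (all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeferredBindingDemo
{
    public static int ExecutionCount;

    // An iterator method: its body runs each time the sequence is enumerated,
    // just like an IQueryable hits the database on each enumeration.
    private static IEnumerable<int> FetchRows()
    {
        ExecutionCount++; // in a real ORM this would be a database round trip
        yield return 1;
        yield return 2;
        yield return 3;
    }

    public static void Main()
    {
        var query = FetchRows().Where(n => n > 1);

        // A binding control that iterates the source twice (say, once to
        // count and once to render) triggers two separate executions:
        var count = query.Count();
        var items = query.ToArray();
        Console.WriteLine(ExecutionCount); // 2

        // Materializing once up front is the usual fix; the resulting list
        // can be iterated any number of times without re-running the query.
        var list = query.ToList();
        var rendered = list.Count + list.ToArray().Length;
        Console.WriteLine(ExecutionCount); // 3 - only the ToList() added one
    }
}
```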

You can find more information about each warning here:

WPF detection:

image

 

image

WinForms detections:

image

image

Web applications:

image

image

time to read 3 min | 406 words

The following features apply to NHProf, EFProf, HProf, L2SProf.

The first feature is something that was frequently requested, but we kept deferring. Not because it was hard, but because it was tedious and we had cooler features to implement: Sorting.

image

Yep. Plain old sorting for all the grids in the application.

Not an exciting feature, I’ll admit, but an important one.

The feature that gets me excited is Go To Session. Let us take the Expensive Queries report as a great example of this feature:

image

As you can see, we have a very expensive query. Let us ignore the reason it is expensive, and assume that we aren’t sure about that.

The problem with the reports feature in the profiler is that while it exposes a lot of information (expensive queries, most common queries, etc.), it also loses the context of where a query is running. That is why you can, in any of the reports, right click on a statement and go directly to the session where it originated:

image

image

We bring the context back to the intelligence that we provide.

What happens if we have a statement that appears in several sessions?

image

You can select each session that this statement appears in, getting back the context of the statement and finding out a lot more about it.

I am very happy about this feature, because I think it closes a circle with regard to the reports. The reports allow you to pull out a lot of data across your entire application, and the Go To Session feature allows you to connect the interesting pieces of data back to the originating session, showing you where and why the statement was issued.
