Hibernating Rhinos

Zero friction databases

Hibernating Rhinos is featured in DZone’s latest research guide

Growing Database Trends: NoSQL and Polyglot Persistence

Hibernating Rhinos is featured in DZone’s latest research guide, the Guide to Database and Persistence Management. This guide features articles from industry experts, key findings from a survey of over 800 developers, an infographic differentiating between databases, and a solution directory of the leaders in database and persistence management. You can download the free research guide here: http://bit.ly/1BmCMbb

As the amount of data generated grows, companies are facing new and unprecedented data management challenges. It’s estimated that the amount of data we currently have will double every two years -- making these challenges more difficult to overcome. DZone’s Guide to Database and Persistence Management is a valuable handbook for understanding and conquering the challenges posed by modern database usage.

Based on DZone’s survey, performance, scalability and data migration are the biggest challenges related to databases.

DZone’s research guide also reports that 37% of respondents expect that their data management technologies will not be sufficient by next year, and they expect to add a new database product to their software by 2016.

 

NoSQL Databases on the Rise

While SQL databases have dominated the IT industry, experts are suggesting that organizations use a multi-database solution or polyglot persistence approach. This gives your organization more flexibility for organizing various types of data, while allowing you to handle each type of data with a more efficient database.

DZone reports that 30% of respondents’ organizations currently use a polyglot persistence approach with at least one relational database and one non-relational database. They also reported that 80% of respondents used two or more database products in their organization.

Read DZone’s Guide to Database and Persistence Management to navigate the multitude of persistence options and to understand the challenges of analyzing data in organizations.


Spatial Sorting

Hi all,

I’d like to share with you a new feature we have been working on in the area of spatial queries.

It began with an issue requesting the ability to sort query results by distance from a given point, without limiting the scope of the query. Truth be told, it took me some time to understand what the problem was and why the results were limited in the first place. To explain that point, I'll give an example of what spatial queries look like in the Client API:

Let’s say that our document looks like:

public class Shop
{
    public string Name { get; set; }
    public double Lat { get; set; }
    public double Lng { get; set; }
}

And our index looks like:

public class ShopSpatialIndex : AbstractIndexCreationTask<Shop>
{
    public ShopSpatialIndex()
    {
        Map = shops => from shop in shops
                       select new
                       {
                           shop.Name,
                           _ = SpatialGenerate(shop.Lat, shop.Lng)
                       };
    }
}

Now, there are various ways to perform a spatial query; somehow I stumbled upon the most detailed and complex one:

store.DatabaseCommands.Query("ShopSpatialIndex", new SpatialIndexQuery()
{
    QueryShape = SpatialIndexQuery.GetQueryShapeFromLatLon(36.15632, 51.70375, 100),
    SpatialRelation = SpatialRelation.Within,
    SpatialFieldName = Constants.DefaultSpatialFieldName,
    SortedFields = new[]
    {
        new SortedField(Constants.DistanceFieldName),
    }
});

In that case, you can see that we can specify a shape, the relation the shape should have to the data, the queried field and the sorted field. The problem is that the sorting is done according to the center of the given shape.

After looking a bit more, I found out that there is a special kind of relation, “Nearby”, which allows sorting without filtering:

store.DatabaseCommands.Query("ShopSpatialIndex", new SpatialIndexQuery()
{
    QueryShape = SpatialIndexQuery.GetQueryShapeFromLatLon(36.15632, 51.70375, 100),
    SpatialRelation = SpatialRelation.Nearby,
    SpatialFieldName = Constants.DefaultSpatialFieldName,
    SortedFields = new[]
    {
        new SortedField(Constants.DistanceFieldName),
    }
});

Now I had a new problem: I could not do both. An example of a case where I'd like to do both is when I'm searching for shops in a particular city, standing inside or outside it, and I want the results ordered by proximity:

 

image

Although the “Nearby” relation inspired me, I couldn’t use it to solve the problem. What I chose to do is allow passing the center of the sorting along with the sorted field name, by concatenating the parameters into a single string. No worries, I encapsulated it in a nice API, although this solution makes the complex, most detailed way of writing the query look even less intuitive:

store.DatabaseCommands.Query("ShopSpatialIndex", new SpatialIndexQuery()
{
    QueryShape = SpatialIndexQuery.GetQueryShapeFromLatLon(36.15632, 51.70375, 100),
    SpatialRelation = SpatialRelation.Nearby,
    SpatialFieldName = Constants.DefaultSpatialFieldName,
    SortedFields = new[]
    {
        new SortedField(string.Format("{0};{1};{2}", Constants.DistanceFieldName, 36.15612, 51.70355)),
    }
});

The alternative methods to perform spatial querying now look like this:

session.Query<Shop>()
    .Customize(x => x.WithinRadiusOf(100, 36.15632, 51.70375))
    .OrderByDistance(new SpatialSort
    {
        Latitude = 36.15612,
        Longitude = 51.70355,
        FieldName = Constants.DefaultSpatialFieldName
    });

or

session.Query<Shop>()
    .Customize(x => x.WithinRadiusOf(100, 36.15632, 51.70375))
    .Customize(x => x.SortByDistance(100, 36.15632, Constants.DefaultSpatialFieldName))

or

session.Query<Shop>()
    .Customize(x => x.WithinRadiusOf(100, 36.15632, 51.70375))
    .Customize(x => x.SortByDistance(36.15612, 51.70355))

I hope you find this useful and that it encourages you to explore and use this and the other spatial functionality of RavenDB.


An appreciation letter from a nice customer

One week after releasing the Appender Only NuGet package, a feature that was requested on the NHProf mailing list by a customer of ours named Dan Plaskon, I got the following appreciation letter from him:

appender-only-appreciation-latter

We always try to respond quickly to our customers’ reports and tell the customer “this will be fixed in the next build”, which usually takes between a few hours (in most cases) and a few days (for more complex issues).

In this case, the user requested a new NuGet package that makes the life of our veteran profiler users much easier. We implemented this feature a day after his second request, and it is nice to see such fast appreciation of our hard work!


New: Uber Profiler “appender only” nuget package is now available

We started to provide NuGet packages for our Uber Profilers early on, as soon as NuGet became available. Back then, we decided to provide you with a one-click solution: you add the NHibernate Profiler (or any other of our profilers) NuGet package to your project, and you can start the profiler right away. We open the profiler interface after you install the package and add some initialization code to your project to hook up the profiler.

That works great most of the time, when you do a fresh installation of the profiler. But when you are upgrading and you do not want to use the default initialization code we provide, it can be annoying to keep removing it with every upgrade. This is why we created this issue, based on user demand.

So you can now use the NHibernateProfiler.Appender or EntityFrameworkProfiler.Appender package (the same goes for our other profilers), which contains just the appender DLL. There is no code setup, and it is also a much slimmer package, so package restore will be faster.
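
With the appender-only package you add (or keep) the initialization call yourself. Here is a minimal sketch of what that usually looks like, assuming the standard Initialize call also mentioned elsewhere on this blog; the important part is calling it as early as possible during application startup:

// A minimal sketch: with the appender-only package you wire up the profiler yourself.
using HibernatingRhinos.Profiler.Appender.NHibernate;

public static class ProfilerBootstrapper
{
    public static void Start()
    {
        // Call this once, as early as possible in the application's startup path.
        NHibernateProfiler.Initialize();
    }
}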

Test Suite for RavenDB - rough ideas

By Jakub Rusilko

For the last couple of weeks we have been trying to come up with a good way of ensuring better stability of RavenDB. It turns out that manual tests and classical unit tests are not enough. Thus we figured that a comprehensive test suite is needed.

We have distinguished a few separate things that we would like to be able to test. The first is the client API, which can be broken into two categories:

  1. the .NET API
  2. the REST API

Within the .NET API we can further distinguish some different aspects that we would like to cover:

  1. sync vs async
  2. remote vs embedded
  3. self host vs IIS

Apart from that we'll also create tests for the external tools:

  1. Backup
  2. Smuggler.

These will include migration tests.

In every test we'll try to cover the correctness of the results as well as the execution times. We are also planning to be able to run stress tests on single and multiple threads.
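
To make the dimensions above a little more concrete, here is a rough sketch of the kind of cross-cutting test we have in mind, using xUnit and the standard RavenDB client API. The entity, the remote URL and the structure are illustrative only; this is not the actual suite:

// A rough sketch of a correctness + timing test across embedded and remote stores.
using System.Diagnostics;
using System.Linq;
using Raven.Client;
using Raven.Client.Document;
using Raven.Client.Embedded;
using Xunit;

public class ClientApiSmokeTests
{
    public class User { public string Id { get; set; } public string Name { get; set; } }

    [Fact]
    public void Embedded_store_and_query()
    {
        RunStoreAndQuery(new EmbeddableDocumentStore { RunInMemory = true });
    }

    [Fact]
    public void Remote_store_and_query()
    {
        // Assumes a RavenDB server is listening on this URL.
        RunStoreAndQuery(new DocumentStore { Url = "http://localhost:8080" });
    }

    private static void RunStoreAndQuery(IDocumentStore store)
    {
        using (store.Initialize())
        {
            var watch = Stopwatch.StartNew();

            using (var session = store.OpenSession())
            {
                session.Store(new User { Name = "test" });
                session.SaveChanges();
            }

            using (var session = store.OpenSession())
            {
                var users = session.Query<User>()
                    .Customize(x => x.WaitForNonStaleResults())
                    .ToList();
                Assert.NotEmpty(users);   // correctness of the results
            }

            watch.Stop();                 // execution time, to compare across versions
        }
    }
}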

What we are currently trying to design is a way to easily test different versions of RavenDB so that we can compare things across versions. Our ultimate goal is to be able to collect all the test data described above and later produce meaningful charts or graphs based on the data to help in tweaking the code. We hope that the results will not only show us whether something works or not in a given version, but also how the execution time of specific operations changes across database versions.

At this point it is difficult to be more specific about the project. We are still just trying out different approaches to see what will work best for us.


Get an insight into what Entity Framework really does under the hoods using EFProf

When using relational databases like SQL Server, MySQL or PostgreSQL, an O/RM tool like Entity Framework can drastically simplify how you interact with the data by mapping (or abstracting) the relational data into objects, which are much easier to consume and use in your application.

The problem with such an abstraction is that in a lot of cases you, or someone else on your team, may use the ORM in a way it should not be used, which leads to an application with bad performance. The real issue is that most of the time developers try to solve those issues down the road, when basic mistakes are already scattered all over the place.

In order to avoid such errors, we developed the Entity Framework Profiler, which gives you insight into what is going on under the hood by showing you the underlying SQL that was generated and highlighting bad SQL that should be improved.

In order to see the benefits you can get from using the Entity Framework Profiler, we will take nopCommerce, a popular open source e-commerce application that makes use of Entity Framework, and profile it with the Entity Framework Profiler.

I opened the nopCommerce solution, set Nop.Web as the startup project, installed the EFProf NuGet package and ran the website by hitting F5. The browser opened and I got an installation form to fill in. I entered the connection string to the SQL Server and chose the “Use sample data” option. Now let’s look at the profiler to see what the installation action actually did.

Step1

See what happened: we opened the object context three times. The first object context does nothing. The second one executes 69 queries, most of them to create indexes or stored procedures. The third one inserts our sample data. In the last two we get some alerts, such as Select N+1, Too Many Joins or Too Many Database Calls, but since this is an installer that runs just once, we can ignore them.

Now, let’s search for a product. Searching for shoes makes about 29 queries to the database:

Step4

As we can see in the alerts section, we can lower the number of queries we are making by fixing the Select N+1 issue that we have here.
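
The usual way to fix a Select N+1 is to load the related data eagerly in the original query instead of letting lazy loading fire one extra query per row. A minimal sketch of the idea, with illustrative entities and context rather than nopCommerce’s actual model:

// Illustrative entities and context, not nopCommerce's actual model.
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual Category Category { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<Category> Categories { get; set; }
}

public static class CatalogQueries
{
    public static List<Product> LoadProductsWithCategories(ShopContext context)
    {
        // One query with a join, instead of one extra query per product for its category.
        return context.Products
            .Include(p => p.Category)
            .ToList();
    }
}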

Clicking on the Computers category and then on the Electronics category, and looking at the profiler, I see the following:

Step5

Note that it executed 87 queries. Let’s pick one of them and analyze it:

Step6

Wow, can you figure out what this query tries to do? I cannot. No wonder the profiler gives the following alerts on this statement:

Step7

So, how can I figure it out? Without the EF Profiler, there is no way to connect an SQL query to the code that was responsible for generating it. But with EFProf we can just look at the stack trace window and jump right to the code that generated this query:

Step8

And here it is:

/// <summary>
/// Gets all categories filtered by parent category identifier
/// </summary>
/// <param name="parentCategoryId">Parent category identifier</param>
/// <param name="showHidden">A value indicating whether to show hidden records</param>
/// <returns>Category collection</returns>
public virtual IList<Category> GetAllCategoriesByParentCategoryId(int parentCategoryId,
    bool showHidden = false)
{
    string key = string.Format(CATEGORIES_BY_PARENT_CATEGORY_ID_KEY, parentCategoryId, showHidden, _workContext.CurrentCustomer.Id, _storeContext.CurrentStore.Id);
    return _cacheManager.Get(key, () =>
    {
        var query = _categoryRepository.Table;
        if (!showHidden)
            query = query.Where(c => c.Published);
        query = query.Where(c => c.ParentCategoryId == parentCategoryId);
        query = query.Where(c => !c.Deleted);
        query = query.OrderBy(c => c.DisplayOrder);

        if (!showHidden)
        {
            //ACL (access control list)
            var allowedCustomerRolesIds = _workContext.CurrentCustomer.CustomerRoles
                .Where(cr => cr.Active).Select(cr => cr.Id).ToList();
            query = from c in query
                    join acl in _aclRepository.Table
                    on new { c1 = c.Id, c2 = "Category" } equals new { c1 = acl.EntityId, c2 = acl.EntityName } into c_acl
                    from acl in c_acl.DefaultIfEmpty()
                    where !c.SubjectToAcl || allowedCustomerRolesIds.Contains(acl.CustomerRoleId)
                    select c;

            //Store mapping
            var currentStoreId = _storeContext.CurrentStore.Id;
            query = from c in query
                    join sm in _storeMappingRepository.Table
                    on new { c1 = c.Id, c2 = "Category" } equals new { c1 = sm.EntityId, c2 = sm.EntityName } into c_sm
                    from sm in c_sm.DefaultIfEmpty()
                    where !c.LimitedToStores || currentStoreId == sm.StoreId
                    select c;

            //only distinct categories (group by ID)
            query = from c in query
                    group c by c.Id
                    into cGroup
                    orderby cGroup.Key
                    select cGroup.FirstOrDefault();
            query = query.OrderBy(c => c.DisplayOrder);
        }

        var categories = query.ToList();
        return categories;
    });

}

As you can see, it is at least using a cache manager, but even with it we see untuned queries.

When you develop an application and use the Entity Framework Profiler, you can be aware of those issues right from the beginning and avoid repeating the same mistakes over and over again.


Feedback is required: Can we require you to have .NET 4.5 in order to use the profiler UI?

As we want to use the async keyword in the profiler’s client, we are thinking of targeting the .NET Framework 4.5.

Considering that the .NET Framework 4.5 came out a year ago and is distributed with Windows 8, we think it is acceptable to ask our users to have it installed on the machine.

Users that still need to use the profiler on a machine with just .NET 4.0 would have to use the current build of the profiler, which is 2197. But since this build requires this update for the .NET Framework 4.0, we figured that it is better to either require the .NET Framework 4.5 (in order to avoid strange errors for users that do not have this update) or drop the use of the async keyword, which we hope to avoid.

Please let us know your opinion about this change. We will wait two weeks or so in order to get feedback from you guys, before pushing a new version to production with this change.


Web API 2–the SPA user interface review

I started to look at the sample code of the Single Page Application template, which uses Web API 2 and ASP.NET MVC 5 beta, and what I see is interesting.

It has a very nice UI implementation, and the built-in authentication options are great:

Login

Basically, there is built-in OAuth authentication against Google, Facebook, Twitter and Microsoft, in addition to a classic local authentication feature. This looks very nice.

Let’s try logging in using Google. I pressed the Google button and the page redirected to the Google login page. After entering my Google account’s credentials, Google asked me whether it should allow localhost to access my data. I clicked the Accept button:

GoogleLogin

After pressing Accept, the page redirected back to my application and asked me to sign up using my Google account.

SignUp

The requirement to specify a user name here seems to indicate that they use the username to identify users instead of emails. Personally, I think that websites should ask for an email in order to identify a user. If they did that, the username could be configured later, from the account settings. Also, using my full name as the user name seems strange, as a user name doesn’t usually contain spaces. But anyway… we will leave this aside.

I pressed the Sign up button.

ToDoList

The UI seems to be really nice. It is all AJAX powered and looks good. I was able to add an item and check it as done, but when I tried to delete it I got a mysterious “Error removing todo item.” error.

Anyway, let’s look more at the authentication options. I clicked on my user name, which redirected me to the “Account/Manage” page.

AccoutManage

Here I can see that I can specify a password, so I’ll be able to authenticate with a local username. So, let’s set a password.

SetPassword

After doing so, I got a “Password changed” message with an option to change the password. Also note that since I now have two ways to log in to the website, it gives me the option to remove one of them, which is nice.

So it seems that all the authentication features a website typically needs are built in, and they seem to work well. In the next post I’ll start to dig into the implementation, which has some surprises.


New Client Coming to RavenDB – Mono For Android

You requested it and we listened: we now have a client for Android (with Mono).

The client supports all the regular client-side operations except for Distributed Transactions, which are not relevant for mobile devices.

The client will be available through a NuGet package or by referencing the DLL in your project.

All you need to do is reference Raven.Client.MonoForAndroid.dll in your Mono for Android project.

From there you can use the client code as you normally would.
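
For example, here is a minimal sketch of what that looks like, assuming the standard RavenDB client API (DocumentStore and sessions); the server URL and the ToDoItem class are illustrative:

// A minimal sketch of using the client from a Mono for Android project.
using System.Linq;
using Raven.Client.Document;

public class ToDoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public static class RavenClientSample
{
    public static void StoreAndLoad()
    {
        // Point the store at your own RavenDB server.
        using (var store = new DocumentStore { Url = "http://my-raven-server:8080" }.Initialize())
        using (var session = store.OpenSession())
        {
            session.Store(new ToDoItem { Text = "Try the Mono for Android client" });
            session.SaveChanges();

            var items = session.Query<ToDoItem>().ToList();
        }
    }
}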

If you want to start working with this already you can download the dll here:

https://www.dropbox.com/s/e2n5fioobv3v09u/Raven.Client.MonoForAndroid.dll

Note that this is an early build, so if something is not working, we would appreciate it if you could create a failing test for it.

You can see how we are testing it in this repository, https://github.com/DanielDar/ravendb, on the 2.5 branch.

Look at the project Raven.Tests.MonoForAndroid

There is also Raven.Tryouts.MonoForAndroid for testing a complete app

You can see a sample of using Android with RavenDB here: https://github.com/DanielDar/RavenAndroidToDoList

In this sample you have a to-do list that is updated across all of the Android devices running it.


EFProf now supports EF 6 alpha 3

We got an email from a customer asking about support for Entity Framework 6 alpha 3 in the Entity Framework Profiler. It took two hours before we had a new build which now supports it.

Part of this speedy patch was that the customer provided a nice small project that reproduced the issue, which is always the best way to get things fixed fast, but the most critical thing was that the EF team addressed the issues we faced in the previous release, EF 6 alpha 2, as outlined here.

So if you are using EF 6 alpha 3, the workaround that was outlined here is not needed anymore. You should just call `EntityFrameworkProfiler.Initialize()` at the very start of your application and that is it.


Use ProfilerIntegration static class in order to hook into the profiler

A user asked us how he can use this class, so I thought it may be a good idea to blog about it.

You can use the HibernatingRhinos.Profiler.Appender.ProfilerIntegration static class in order to hook into the profiler and do some interesting stuff.

The two most common uses of this class are: 1. telling the profiler to ignore some code and not record it; 2. recording something in the profiler that wasn’t executed by the ORM but by ADO.NET directly.

Ignore and not profile a portion of code

This is mostly useful when running unit tests and you need to do some initialization to the database which you prefer not to see in the profiler output.

You can do this using:

using (ProfilerIntegration.IgnoreAll())
{
    // Do something
}

Or

ProfilerIntegration.IgnoreAll();
// Do something
ProfilerIntegration.ResumeProfiling();

Record custom messages inside the profiler

This is mostly useful if you are using the pure ADO.NET API alongside the ORM you’re using and you want to record those actions.

In this case, since you aren’t using the DbConnection/DbDataReader/DbCommand etc. that the profiler created, you’ll need to notify the profiler about each action.

This can be done using the following code:

ProfilerIntegration.PublishProfilerEvent("session id", "logger name", "message");

It takes 3 parameters:

Session ID = a string that identifies the session to which this statement belongs. In NHibernate you’ll want to use ((SessionImpl)session).SessionId.ToString(). In other ORMs, you can use some GUID which should be the same for all statements in the session.

Logger name = specifies the log type. This is ORM specific: in NHibernate you’ll use "NHibernate.SQL" in order to log a statement, while in EF this will be "EntityFramework.Sql".

Message = the actual message to record. If this is an SQL statement, you’ll provide the SQL statement here.

Example:

            const string sql =
                @"SELECT
        first 5 this_.LOG_ID as LOG1_0_0_,
        this_.LOG_DATE as LOG2_0_0_,
        this_.APP_NAME as APP3_0_0_,
        this_.LOGGER as LOGGER0_0_,
        this_.LOG_LEVEL as LOG5_0_0_,
        this_.MESSAGE as MESSAGE0_0_,
        this_.EXCEPTION_MESSAGE as EXCEPTION7_0_0_,
        this_.CONTEXT as CONTEXT0_0_
FROM APP_LOG this_ ORDER BY this_.LOG_DATE desc";

            using (var s = (SessionImpl)factory.OpenSession())
            {
                ProfilerIntegration.PublishProfilerEvent(
                    s.SessionId.ToString(),
                    "NHibernate.SQL",
                    sql);
            }

PublishProfilerEvent is a really low-level API. If you’re using Entity Framework Profiler you can use the EntityFrameworkAppender class, which provides a higher-level API:

var sessionId = Guid.NewGuid();
var entityFrameworkAppender = new EntityFrameworkAppender("My app");
entityFrameworkAppender.ConnectionStarted(sessionId);
var statementId = Guid.NewGuid();
entityFrameworkAppender.StatementExecuted(sessionId, statementId, "Select ...");
entityFrameworkAppender.StatementRowCount(sessionId, statementId, 8);
entityFrameworkAppender.StatementError(sessionId, new InvalidOperationException("Report an error to the profiler"));
entityFrameworkAppender.ConnectionDisposed(sessionId);

But this is considered an internal API which may change.

NHibernateProfiler doesn’t expose a high-level API like this, so you’ll need to call ProfilerIntegration.PublishProfilerEvent with the correct logger name. If you would like to know a particular logger name, you can email us and we’ll provide you with the information that you need.


Use log4net version 1.2.11.0 with NHibernate Profiler

We got lots of requests to support log4net 1.2.11.0 when using the web.config or app.config to configure the profiler. We already supported log4net 1.2.11 if you initialized the profiler using code, e.g. NHibernateProfiler.Initialize(), by auto-compiling our inner appender against the version of log4net that your application is using. But when you initialize the profiler using a config file, you need an assembly that is already compiled against log4net.

We thought about how we could support this, and we came up with an interesting solution.

Starting with build #2122, we let you call the following code from your application, which will generate an assembly compiled against the version of log4net that your application is using and save it to the location that you specify in the parameter:

NHibernateProfiler.GenerateAssemblyForCustomVersionOfLog4Net(AppDomain.CurrentDomain.BaseDirectory);

This will create the following dll in the output folder:

HibernatingRhinos.Profiler.Appender.CustomNHibernateLog4Net.dll

Now you can use this DLL in order to initialize the profiler using the configuration file.


Entity Framework Profiler support for Entity Framework 6 alpha 2

Recently a lot of our users asked us to support Entity Framework 6 alpha 2 in Entity Framework Profiler. We started to look at how we could support it right away, but we found that we could not support it easily.

The main issue was that Entity Framework 6 alpha 2 exposes a class called DbConfiguration which you can use as an injection point into EF. But the way EF currently works, it looks for this class only in the same assembly that uses the DbContext, which is your application executable/DLL but not our appender DLL.

So we contacted the EF team (who were very helpful and responsive) and discussed some of the issues that we saw. While a few of the issues have already been addressed by the EF team and committed to the EF repository, the good news is that we found a way to support Entity Framework 6 alpha 2 by adding just one source file to your application.

Take the following file and add it to your application: https://gist.github.com/4539561.

Now make sure to use build #2111 or later of Entity Framework Profiler and it will work.

I’m also including the same source code from the above gist here. Enjoy using Entity Framework Profiler!

using System;
using System.Collections.Generic;
using System.Data.Common;
using System.Data.Entity.Config;
using System.Data.Entity.Core.Common;
using System.Data.Entity.Core.EntityClient;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations.Sql;
using System.Data.Entity.SqlServer;
using System.Reflection;
using HibernatingRhinos.Profiler.Appender.EntityFramework;
using HibernatingRhinos.Profiler.Appender.ProfiledDataAccess;

namespace HibernatingRhinos.Profiler.IntegrationTests.EntityFramework6Beta2
{
    public class ProfiledDbConfiguration : DbConfiguration
    {
        public ProfiledDbConfiguration()
        {
            AddDependencyResolver(new ProfiledDbDependencyResolver(this));
        }
    }

    public class ProfiledDbDependencyResolver : IDbDependencyResolver
    {
        private readonly IDbDependencyResolver rootResolver;

#if DEBUG
        public static HashSet<string> types = new HashSet<string>();
#endif

        public ProfiledDbDependencyResolver(DbConfiguration originalDbConfiguration)
        {
            // Get the original resolver
            var internalConfigProp = originalDbConfiguration.GetType().GetProperty("InternalConfiguration", BindingFlags.Instance | BindingFlags.NonPublic);
            var internalConfig = internalConfigProp.GetValue(originalDbConfiguration, null);
            var rootResolverProp = internalConfig.GetType().GetProperty("RootResolver", BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public);
            rootResolver = (IDbDependencyResolver)rootResolverProp.GetValue(internalConfig, null);
        }

        public object GetService(Type type, object key)
        {
#if DEBUG
            types.Add(type.Name);
#endif

            if (type == typeof(IDbProviderFactoryService))
            {
                var innerFactoryService = (IDbProviderFactoryService) rootResolver.GetService(type, key);
                return new ProfiledDbProviderFactoryService(innerFactoryService);
            }

            if (type == typeof (DbProviderServices))
            {
                var inner = (DbProviderServices)rootResolver.GetService(type, key);
                var appender = new EntityFrameworkAppender(typeof (SqlProviderServices).Name);
                var profiledDbProviderServicesType = EntityFrameworkProfiler.CompiledAssembly.GetType("HibernatingRhinos.Profiler.Appender.EntityFramework.ProfiledDbProviderServices");
                if (profiledDbProviderServicesType == null)
                    throw new InvalidOperationException("Could not get the profiled DbProviderServices.");
                return Activator.CreateInstance(profiledDbProviderServicesType, new object[] {inner, appender});
            }

            if (type == typeof(MigrationSqlGenerator))
            {
                if (rootResolver.GetService(type, key) is SqlServerMigrationSqlGenerator)
                {
                    return new ProfiledMigrationSqlGenerator();
                }
            }

            return null;
        }
    }

    public class ProfiledDbProviderFactoryService : IDbProviderFactoryService
    {
        private readonly IDbProviderFactoryService innerFactoryService;

        public ProfiledDbProviderFactoryService(IDbProviderFactoryService innerFactoryService)
        {
            this.innerFactoryService = innerFactoryService;
        }

        public DbProviderFactory GetProviderFactory(DbConnection connection)
        {
            if (connection is ProfiledConnection)
            {
                var connectionType = connection.GetType();
                if (connectionType.IsGenericType)
                {
                    var innerProviderFactory = connectionType.GetGenericArguments()[0];
                    var profiledDbProviderFactory = EntityFrameworkProfiler.CompiledAssembly.GetType("HibernatingRhinos.Profiler.Appender.EntityFramework.ProfiledDbProviderFactory`1").MakeGenericType(innerProviderFactory);
                    return (DbProviderFactory) Activator.CreateInstance(profiledDbProviderFactory);
                }
            }

            if (connection is EntityConnection)
                return innerFactoryService.GetProviderFactory(connection);

            throw new InvalidOperationException("Should have ProfiledConnection but got " + connection.GetType().FullName + ". If you got here, you probably need to modify the above code in order to satisfy the requirements of your application. This code is intended to support EF 6 alpha 2.");
        }
    }

    public class ProfiledMigrationSqlGenerator : SqlServerMigrationSqlGenerator
    {
        protected override DbConnection CreateConnection()
        {
            return DbProviderFactories.GetFactory("System.Data.SqlClient").CreateConnection();
        }
    }
}

Hibernate Profiler now supports Hibernate 4

Starting from build #2112, Hibernate Profiler now fully supports Hibernate 4.

Instructions on how to set up Hibernate Profiler can be found here: http://hibernatingrhinos.com/products/hprof/learn/general/gettingstarted.

We are thinking of providing a Maven package for Hibernate Profiler, so instead of downloading Hibernate Profiler by hand you’ll be able to use Maven to get the binaries. Would you like to see this feature? Tell us in the comments!


Uber Prof Production Profiling–Profiling production application on Azure

In this post I’m going to cover one of the great features of Uber Prof v2.0: production profiling of remote applications running on Windows Azure.

The production profiling feature is currently supported in the following profilers: NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler and LLBLGen Profiler. In this post I’m going to demonstrate how to use it with the Entity Framework Profiler.

First, we need to create our project. Since we want to host the project on Windows Azure, I’ll use an ASP.NET MVC + Web API project.

ProductionProfiling1

The next thing is to enable production profiling. In order to do so, we need to install the EntityFrameworkProfiler.Production NuGet package.

ProductionProfiling2

As a side note, if you are wondering what the other package, called EntityFrameworkProfiler, is for, you probably want to read this post.

After the package is installed, you can see that it created the following file in our project:

using System;
using System.IO;
using System.Web.Http;
using HibernatingRhinos.Profiler.Production;

[assembly: WebActivator.PreApplicationStartMethod(typeof(ProductionProfilingOnAzureSample.App_Start.EntityFrameworkProfilerProductionBootstrapper), "PreStart")]
namespace ProductionProfilingOnAzureSample.App_Start
{
    public static class EntityFrameworkProfilerProductionBootstrapper
    {
        public static void PreStart()
        {
            // Initialize the profiler with the production profiling feature. 
            // Production profiling lets you see profiling information remotely using the following URL: http://your-server/profiler/profiler.html
            string license = GetResource("ProductionProfilingOnAzureSample.App_Start.EntityFrameworkProfilerLicense.xml");
            ProductionProfiling.Initialize(license, GlobalConfiguration.Configuration);
        }

        private static string GetResource(string sourcesResource)
        {
            using (var sourceCodeStream = typeof(EntityFrameworkProfilerProductionBootstrapper).Assembly.GetManifestResourceStream(sourcesResource))
            {
                if (sourceCodeStream == null)
                    throw new InvalidOperationException(string.Format("Resource file is missing: {0}", sourcesResource));
                return new StreamReader(sourceCodeStream).ReadToEnd();
            }
        }
    }
}

This class will run at the startup of your project and will expose a URL endpoint that we can use to load the profiler UI. In this case we’re using the GlobalConfiguration.Configuration from the Web API Web Host package, but please note that you can also expose your own self-hosted server in order to production-profile different types of projects, like a service, or whatnot.

Also note that we’re embedding the license file as an embedded resource in our project and we’re passing the license string to the Initialize method of the ProductionProfiling class. This is needed since the profiler is now split into two parts: the server, which is embedded right into your application and runs within your application’s own process, and the UI client, which is a Silverlight application that you get from the remote server through your browser. So again, the license string is passed and evaluated right in the server initialization process.

Now it’s time to run our project. I ran it, but got the following exception:

Resource file is missing: ProductionProfilingOnAzureSample.App_Start.EntityFrameworkProfilerLicense.xml

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.InvalidOperationException: Resource file is missing: ProductionProfilingOnAzureSample.App_Start.EntityFrameworkProfilerLicense.xml
Source Error:

Line 22: 			{
Line 23: 				if (sourceCodeStream == null)
Line 24: 					throw new InvalidOperationException(string.Format("Resource file is missing: {0}", sourcesResource));
Line 25: 				return new StreamReader(sourceCodeStream).ReadToEnd();
Line 26: 			}

Source File: c:\Users\Fitzchak\Documents\Visual Studio 11\Projects\ProductionProfilingOnAzureSample\ProductionProfilingOnAzureSample\App_Start\EntityFrameworkProfilerBootstrapper.cs    Line: 24

Remember what I told you? We need to embed a license file. So let’s go and ask for a trial license file: http://hibernatingrhinos.com/products/efprof/trial.

After doing that, I got a trial license in my email inbox. Now we need to include it in the project as an embedded resource.

ProductionProfiling3

And now we are ready to see the profiler in action. Run the project and browse to the /profiler/profiler.html endpoint:

ProductionProfiling4

Do you see that? This is the Entity Framework Profiler running right in your browser, letting you profile your remote production application right from your computer!

You may be wondering why we do not see actual data in the profiler yet, but that is your work to do. Attach it to your application and you’ll see some nice information that will easily let you understand what your ORM is doing.

Here is an example:

ScreenShots-EFProf-InBrowser (1)

Wait a second… Didn’t you say that we’re going to run this on Azure?

Yes, I added the following code to the HomeController and pushed the code to Azure:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        for (int i = 0; i < 2; i++)
        {
            using (var context = new ApplicationContext())
            {

                context.Messages.Add(new Message {Text = "Text 1", CreatedAt = DateTime.UtcNow});
                context.SaveChanges();

                var messages = context.Messages.ToList();

                var messages2 = context.Messages
                                        .Where(message => message.Text == "Profiling on Azure" && message.Text.Length > 10)
                                        .ToList();
            }
        }
        return View();
    }
}

You can run the above code by opening the following page,  http://efprof.cloudapp.net/, and see the profiler in action here: http://efprof.cloudapp.net/profiler/profiler.html.

ProductionProfiling6

For any feedback, please contact us in the profiler forums, by email (support at hibernatingrhinos.com) or just leave a comment here.


What’s new in Uber Prof v2.0?

Along with the launch of our new website hibernatingrhinos.com, we’re happy to announce that v2.0 of Uber Prof profilers is now in beta!

While we’re perfecting the product, you can take advantage of that and buy a license now with a 20% discount. This applies to both the monthly license and the standard license.

You can also request a trial license on the website, which will let you use the profiler for 30 days.

Now for the interesting stuff: what’s new in Uber Prof v2.0?

1. Silverlight interface. When you run profiler.exe, the UI that opens is an out-of-browser Silverlight application. You can also open the profiler interface in your browser, which means that you can profile remote systems! In fact, we ported all of the WPF user interface of v1.0 to Silverlight in order to enable production profiling of remote systems.

ScreenShots-NHProf-StatementDetails

2. Production profiling. We now support profiling production systems, using HibernatingRhinos.Profiler.Production.dll. If you’re using ASP.NET MVC, you will be able to configure the profiler to use the “/profiler” route in order to see the profiler interface in your browser.

ScreenShots-EFProf-InBrowser

3. Performance. We tuned the performance of the profiler and it is now far more responsive. You can run the profiler for days, and it will still work.

ScreenShots-NHProf-SessionFactoryStatistics

4. Running in the cloud. We did some work in order to let you profile your application on Azure. You can profile your cloud application right from your browser, using the production profiling feature.

The next post will contain instructions on how to run the profiler in a production application on Azure.


Feature explained–Spatial Geocode from address

In August we added the option in spatial queries to get the geocode from an address.

We already did something like that in the “Events” page on the RavenDB website, and it was quite easy with the Google Maps API.

However, when we tried to implement it in the Studio, we discovered that doing it in Silverlight is not as simple.

Apparently, in order to get the Google API to work with Silverlight you need to set up a bridge server that has the appropriate clientaccesspolicy.xml, because the settings on the Google API server do not match the permissions required by Silverlight.

So we looked for another service with settings that allow it to work with Silverlight without any complications, and what we found is that Yahoo! Maps geocoding does just that.

Setting up the Yahoo API was even easier than Google’s and is done like this:

var url = "http://where.yahooapis.com/geocode?flags=JC&q=" + queryModel.Address;
var webRequest = WebRequest.Create(new Uri(url, UriKind.Absolute));
webRequest.GetResponseAsync().ContinueOnSuccessInTheUIThread(doc =>
{
    RavenJObject jsonData;
    using (var stream = doc.GetResponseStream())
    {
        jsonData = RavenJObject.Load(new JsonTextReader(new StreamReader(stream)));
    }

    var result = jsonData["ResultSet"].SelectToken("Results").Values().FirstOrDefault();

    if (result != null)
    {
        queryModel.Latitude = double.Parse(result.Value<string>("latitude"));
        queryModel.Longitude = double.Parse(result.Value<string>("longitude"));
    }
}).Catch();

As you can see, it is simply a web request to a base URL with the address appended.

For more information about the Yahoo geocode API, see: http://developer.yahoo.com/maps/rest/V1/geocode.html

New Option in the RavenDB Studio–Patching

Not long ago we added the option to do Eval Patching in RavenDB (see: http://ayende.com/blog/157185/awesome-ravendb-feature-of-the-day-evil-patching).

Recently we added this option to the studio.

Now when you open the Studio you will see a new “Patch” tab in the options:

image

Once there you will see the following screen:

image

 

As you can see, this page has many options, so let’s look at them separately.

image

At the top left you have a toolbar with several options. The first one is a combo box where you can choose what you want to patch (“Document”, “Collection” or “Index”).

Next to it you have the “Test” option (which is only available after selecting a document to check the patch on). When we click this button, the fields at the bottom of the screen (titled “Before Patch” and “After Patch”) are populated with the selected document and with how it will look after the patch.

Next we have the “Save” and “Load” options. With those you can save this page’s settings and load them again for later use.

Next we have the “Patch” option that executes the patch command (this will save the changes to the database).

Let’s look at a small example of patching:

In this case I want to change the genre name from “Rock” to “Rock and Roll”. After I pressed “Test” I got the screen below; as you can see, the “Name” is different.

image

As you can see, on the right of the script we have the “Parameters” list, where I can do this patch in a different way:

image

As you can see, we added another value to the document with the help of parameters.
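
For reference, the same kind of change can also be done from the Client API with a scripted patch. A minimal sketch, with an illustrative document id and server URL (the Studio uses its own plumbing; this just shows the equivalent script and parameter):

// A minimal sketch of applying the scripted patch from the Client API.
using System.Collections.Generic;
using Raven.Abstractions.Data;
using Raven.Client.Document;

public static class PatchSample
{
    public static void RenameGenre()
    {
        using (var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        {
            // Patch a single document with a small script; newName is passed as a parameter.
            store.DatabaseCommands.Patch("genres/1", new ScriptedPatchRequest
            {
                Script = "this.Name = newName;",
                Values = new Dictionary<string, object> { { "newName", "Rock and Roll" } }
            });
        }
    }
}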

 

Now let’s go over the changes when using “Collection”:

image

In “Collection” you can see that the “Patch” option was split into two options: “Patch Selected” and “Patch All”.

In order to patch only some of the items in the collection, you can select them in the “Matching Documents” section that was added to the page.

Another thing we have is the option to select which collection we want to patch.

 

And last we have the option to patch by index:

image

In addition to the changes we had for “Collection”, we now have a “Query” field for the index, and the patch will apply only to the documents that match this query (the “Matching Documents” list will update accordingly).

Update in RavenDB: A new statistics page

Lately we updated the statistics page in RavenDB Studio.

This is how the new page looks:

clip_image002

If you look, you can see that the data we see for "Artists" is different from what we see for "AlbumsByCountSold"; this is because "Artists" has reduce data and we want to see that.

Another thing we added is the option to view statistics in groups. In the top left you can see a ComboBox where you can choose from several options:

clip_image003

If you select "Single Items" you will see all the data that is not a part of a list:

clip_image004

For each index in your database you will see an item in the ComboBox that will show the stats for that index:

clip_image006

In addition you can view all the indexes.

If you have stale indexes another section will be added to the page:

clip_image008

If you press the stale index name, you will see the stats for that index.

Another thing we have added: in the query page, if you are using an index (not a dynamic query), you can go directly to that index's statistics.

clip_image010

The rightmost button will send you to the statistics page with the data for this index displayed.


New feature in RavenDB Studio: get spatial geocode from address

For a while now we have had the option to do spatial search; in order to do that we need the longitude and latitude of the center we want to search around.

However, most of the time what we really want to check is results around a specific location (we don’t really care about the longitude and latitude but about the address they represent), so usually you would go to some site where you input an address and get back the longitude and latitude.

With the new feature in RavenDB Studio we let you enter an address and we update the longitude and latitude for you:

The new UI:

clip_image002

We enter an address:

clip_image004

And after we press "Calculate from address":

clip_image006

It is that simple.

In my next post, I’ll discuss exactly how we did it. It turned out that it isn’t as trivial in Silverlight as we might want.


Dealing with conflicts in RavenDB studio

Recently we added the option to manage bundles in the studio (see http://blogs.hibernatingrhinos.com/12577/ravendb-1-2-studio-features-the-new-database-wizard).

One of these bundles is the Replication bundle. Sometimes with replication a conflict can happen (this could happen for several reasons, like two or more documents with the same id being created on different destinations). The way RavenDB deals with this is by creating one document for each available version of the document, and in the “original” document we keep the names of these copies.

With the new conflict resolution UI, when you try to edit the original document you will receive a resolution suggestion.

Let’s create a conflict on purpose and see some of the options.

In order to create the conflict, we will create two databases on the same server, both with the Replication bundle enabled, but we will not set the destination yet:

clip_image002

Now we will create a document on each database with the same Id:

 

clip_image004 clip_image006

Now we will edit the destinations on the left database and connect it to the right one:

clip_image008

Now we will go to the destination and look at the documents list:

clip_image010

As you can see, we have three documents whose Id starts with “Example/1”.

Now we get to the interesting part: if we go and try to edit “Example/1” we will get the following page:

clip_image012

The Studio realizes that we want to edit a document that has a conflict and gives us a suggestion. As long as there are conflict comments in the document, you will not be able to save it.

You can see that RavenDB Studio detected an issue with the Name: we give you a list of all possible names, and all you need to do is delete what you don’t want.

Let’s say we want it to be “Hibernating”; we will delete what we don’t want and save.

Before save:

clip_image014

After save:

clip_image016

Now if we take a look at the documents list you can see that we are left with just the fixed version:

clip_image018

Now let’s look at what RavenDB will suggest for several other conflicts:

Example 2 – Same Item:

clip_image020 clip_image021

The result:

clip_image023

Notice that we still let you know that a conflict has occurred, but there are no comments in the document, which means you can simply save to resolve the conflict.

Example 3 – List With Different Items

clip_image025 clip_image027

The result:

clip_image029

We put both items in the array, but add a comment so that the user knows that the data was changed.

Example 4 – List with some items the same and some different

clip_image031 clip_image033

The result:

clip_image035

We combined all the data into one array with no repeats and noted the change.

For more about conflicts read: https://ravendb.net/docs/server/scaling-out/replication/handling-conflicts

Creating a Wizard in Silverlight

When creating the new “Create New Database” feature for the Studio, we ran into an issue with showing several child windows one after the other, like an install wizard (keeping in mind that if Cancel is clicked on one page, we don’t want to show the next windows).

Unfortunately, Silverlight does not have anything like that, so we needed to create one for ourselves.

The first thing we had in mind when approaching this is that Silverlight works asynchronously, which means we need to wait for a page to close before we open the next one.

The way we dealt with that was to listen to every window’s Closed event and, on close, open the next page (if one exists). The Wizard class looks like this:

clip_image002

The ShowNext() method checks whether we have more windows to show and shows the next one, or returns with the result if we are done.

We use a TaskCompletionSource and the TPL to make sure that all of that is wrapped in a nice Task, and that is pretty much it. With that done, we can start using it. You can see how easy it is here:

clip_image004
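
Since the class above is shown only as an image, here is a minimal sketch of the same idea, assuming Silverlight’s ChildWindow and its Closed event; the names are illustrative, not the actual Studio code:

// A minimal sketch of a Silverlight wizard wrapped in a Task via TaskCompletionSource.
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Windows.Controls;

public class Wizard
{
    private readonly Queue<ChildWindow> pages;
    private readonly TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();

    public Wizard(params ChildWindow[] pages)
    {
        this.pages = new Queue<ChildWindow>(pages);
    }

    public Task<bool> StartAsync()
    {
        ShowNext();
        return tcs.Task;
    }

    private void ShowNext()
    {
        if (pages.Count == 0)
        {
            tcs.TrySetResult(true);      // all pages were confirmed
            return;
        }

        var page = pages.Dequeue();
        page.Closed += (sender, args) =>
        {
            if (page.DialogResult == true)
                ShowNext();              // user confirmed, show the next page
            else
                tcs.TrySetResult(false); // user cancelled, stop the wizard
        };
        page.Show();
    }
}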


Dynamic Tab View In Silverlight

With the new “Create New Database” feature in the RavenDB Studio, we added the option to edit your bundles.

We wanted something that seemed quite straightforward: a tab view where we could dynamically choose which tabs are visible and which are not.

At first we tried doing it with the TabControl, binding the header visibility of each TabItem to a property in the model that lets us know whether the database has that bundle.

This is how I had done it:

clip_image002

At first it seemed to work; however, when looking closely we noticed something like this:

clip_image004

clip_image006

As you can see, in the header we only have “Versioning”, but the tab content we see is the Quotas tab.

This happened because we hid the header but the item itself was still the “SelectedItem”.

So we tried to bind the “SelectedItem” as well, but apparently there is an issue with TabControl and MVVM.

Finally we decided to create our own tab view implementation, and this is how we did it:

clip_image008

On the model side we have:

clip_image010

The Bundles Collection is updated with the list of bundles that are active for this database.

One other thing we needed to add was initializing SelectedBundle to the first one in the list. We did that where we initialize the Bundles collection, like this:

clip_image012
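
Since the model is shown only as images, here is a minimal sketch of the idea on the model side; the names are illustrative, not the actual Studio code. The view binds an items control to Bundles and shows the content matching SelectedBundle, instead of relying on TabControl’s SelectedItem:

// A minimal sketch of the model-side of the dynamic tab view.
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;

public class DatabaseSettingsModel : INotifyPropertyChanged
{
    private string selectedBundle;

    public ObservableCollection<string> Bundles { get; private set; }

    public string SelectedBundle
    {
        get { return selectedBundle; }
        set
        {
            selectedBundle = value;
            OnPropertyChanged("SelectedBundle");
        }
    }

    public DatabaseSettingsModel(IEnumerable<string> activeBundles)
    {
        Bundles = new ObservableCollection<string>(activeBundles);
        // Make sure the first visible "tab" is selected by default.
        SelectedBundle = Bundles.FirstOrDefault();
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}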

RavenDB 1.2 Studio Features: The New Database Wizard

In the new version of the RavenDB Management Studio we wanted to enhance the options you have. One of these enhancements is the new "Create New Database" wizard.

We wanted to add the option to choose bundles for your database, configure various options, and in general give you an easy way to control settings that you could previously change only if you had mastered the zen of RavenDB.

Now when you select the New Database option from the databases page you will get the following window:

clip_image002

If you check the “Advanced Settings” option you will get this window:

clip_image004

Now you can change the locations where the data (path), logs and indexes will be stored.
In some cases putting the data, logs and indexes on different drives can improve performance.

On the left side you have the bundles selection area; every bundle you select here will be added to your database. More information about bundles: https://ravendb.net/docs/server/bundles

Some of the bundles have settings that you can (and should) configure:

Encryption:

clip_image006

Here you can choose your encryption key (or use the randomly generated one). After the database is created, a pop-up will show you the key one last time. Make sure you have this key saved somewhere; after that window closes we will not give you the key again.

Quotas Bundle

clip_image008

Here you can set the limits for quotas (see http://ravendb.net/docs/server/bundles/quotas).

Replication Bundle

clip_image010

Here you can set the replication destinations (and add as many as needed); see http://ravendb.net/docs/server/scaling-out/replication

Versioning

clip_image012

Here you can set up versioning (see http://ravendb.net/docs/server/bundles/versioning).

Notice that you can’t remove the default configuration or change the Collection for it but you can edit the values.

After you press “OK” the database will be created (if you chose the Encryption bundle, you will now see the encryption key one last time).

If in the future you want to edit the bundle settings (for quotas, replication and versioning), you can right-click the database in the databases view and select “edit bundles”:

clip_image014

Note: you cannot change which bundles are part of the database after creation.


Does a blog post have an image?

Following on from my previous post, a different requirement that I may have is to have a picture for each blog post. I say may have, because this is how the designer designed the website. In the HTML that I got, each blog post has a picture. But in practice, neither the Hibernating Rhinos blog nor Ayende’s blog has a picture for each blog item. It is not even a requirement for those blogs. After all, blog posts are all about text, so it is perfectly valid to have a blog post without an image.

At first I thought to just change the design and remove the picture. But then I thought: well, some of the blog posts do have an image or two in them. What if I just parse the blog content and look for <img /> tags there? I can use them! It may be worth testing this.

A quick search led me to this answer on StackOverflow.

Based on that, I modified the code in the previous post to the following:

RegexImg
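
Since the code above appears only as an image, here is a minimal sketch of the same idea; the regex and the names are illustrative:

// A minimal sketch: pull the first <img> src out of a post's HTML with a regex.
using System.Text.RegularExpressions;

public static class PostImageExtractor
{
    // Matches the src attribute of the first <img> tag in the post's HTML.
    private static readonly Regex ImgSrcRegex = new Regex(
        "<img[^>]+src\\s*=\\s*[\"']([^\"']+)[\"']",
        RegexOptions.IgnoreCase | RegexOptions.Compiled);

    public static string GetFirstImageUrl(string postHtml)
    {
        var match = ImgSrcRegex.Match(postHtml ?? string.Empty);
        return match.Success ? match.Groups[1].Value : null;
    }
}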

Very simple. I’ll wait to see how this behaves in practice, with real data. And now I’m thinking that I can even improve this further, by querying the actual images and picking the best one that matches the target dimensions.

I like this too.
