
Week 6

Day 1

Today I was mainly focused on getting the rest of the scalar data types to bind to their parameter names. The goal was to add each one to a BSON document with its name and value stored as a field. For the scalar data types this was fairly straightforward, though there were a couple of conversions I had to make and am still looking into. The signed int seemed to be a 1:1 mapping in MongoDB, but unsigned ints didn't seem to be supported. For that case I simply cast the value to a signed int that was positive and added it to the document. I think I will have to do some more documentation on the translation of data types, but for now I just want to get things roughly working. While messing with the data types and testing the bind param functions I also learned a lot about ECL types. You can declare how big a string, int, or other data type you want by putting a number at the end of the type name (e.g. INTEGER4 or STRING20). This changes the data that gets sent to the plugin, so it was important to take that into account even though the size is up to the user to decide.

Right now the plugin takes all of the parameters and adds them to a BSON document. This is useful because the document can then be inserted directly into a collection, or passed to find to search a collection for a matching document. It is possible that this could limit future functionality, but the BSON document builder is pretty useful, and there is a way to get the fields back out of the document, which should make it adaptable to other uses that I haven't seen yet.

Currently, a function can be created that calls insert_one to insert a single document containing all of the passed parameters of most types. I tested it by adding more and more types to see how the document builder handles them, and everything seems to be getting added successfully.


Day 2 & 3

Today I wanted to get all of the scalar values working. I thought strings would be the most difficult, so I saved them for last, but it actually wound up being DECIMAL that gave me the most trouble. The value for the DECIMAL data type gets passed in as a void pointer. That was frustrating enough: why do they need to use void, so that now I have to cast the pointer to a different type before I can use it? I wish it had been that simple. I tried for way too long to see the data that was being pointed to, but to no avail; it seemed to be pointing at junk. I referenced couchbase's implementation of the bindDecimalParam method, tried what they did, and it didn't work either. Now I was not only frustrated but also confused. How could this pointer not be pointing to anything, and if there was actually something there, why couldn't I see it? I thought I was just misunderstanding how to use void pointers, but I don't think I am. A void pointer is just a pointer to a memory address without a data type associated with it, which means you should be able to look at what is there once you cast it to a data type. I even went into plain gdb on the command line and tried printing the pointer after casting it to a bunch of different types. After having no luck I decided that maybe I was just missing something and would come back to it when I dive deeper into the data types, because right now I am just focusing on building the documents for inserting and finding.


Since the data type conversions will be a larger problem and will need some more serious research, I wanted to focus on getting ECL datasets bound, so I started looking into the bindDatasetParam method and reading through the couchbase code. My implementation was pretty much the same, and the problem I was having was that the data from the dataset row was coming in, but I couldn't see any other information such as the names or types of the values.


Day 4 & 5

Today I was focused on figuring out how the datasets get passed in and what I need to do with them. I tried following it in the debugger, but it kept stepping into files that it couldn't find. I realized this was an issue with my build, and after rebuilding and installing the platform again I could step into those files along with the process function. When an ECL DATASET is passed as a function argument, the MongoDBEmbedFunctionContext object calls bindDatasetParam. That function then creates a new MongoDBDatasetBinder, passing several pointers as arguments. I had to look through the couchbase code to see what all of the objects being passed in were for. I'm still not really sure what some of them do, but I can look at their class definitions easily since I'm using VS Code, which is really nice.

The problem is that the debugger steps in and I can see the row in memory, but when the process function gets called it doesn't do anything with the variables. I was trying to use the same functions the couchbase plugin used, but I am not sure they were doing anything. It could come down to me not initializing the RowBuilder object correctly, because somewhere down the line it makes a copy of a pointer to an object that is part of the MongoDBRecordBinder class.


When I look at the object that is misbehaving, it says it is an RtlTypeInfo object, and I have no idea what that means. The comment for it says it is the core interface for the field meta information.


