mongodb-user
[Arriba] [Todas las Listas]

Re: [mongodb-Usuario] mongodb los registros viejos que pierden

To: mongodb-user <mongodb-user@xxxxxxxxxxxxxxxx>
Subject: Re: [mongodb-Usuario] mongodb los registros viejos que pierden
From: Asya Kamsky <asya@xxxxxxxxxxx>
Date: Mon, 31 Aug 2015 23:01:35 -0700
Delivery-date: Tue, 01 Sep 2015 02:12:30 -0400
Dkim-signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlegroups.com; s=20120806; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:x-original-sender:x-original-authentication-results :reply-to:precedence:mailing-list:list-id:x-spam-checked-in-group :list-post:list-help:list-archive:sender:list-subscribe :list-unsubscribe; bh=EF+ZloPp3mxgkw0BD3oo97e0P0GQQWFr8Dm0kG5B18A=; b=xmvykb84TsBWavUeSOU2raXTgwLfssK6BuJf84++HkTjIkFWfXE/y17QlaR3fNsV9d EhgqiJOS6CED9TLTVezbQkXHFdnGCW9JM8ygt37Bknl067kPK74daWF6m/iDykDqvcCD MfTSrd4G+9CT5Pti3p8/nBt6ZvK9iCswPvHLCRM0UpOhQX5gZJfKeKirly0qfPdZjTK+ SlW6sFbW5M5E5Al/AQVWADBDnC1zLoyA79FzBh+isvnbtCHsntCDpVi1E31hts/4zYhY XPNPUFLqd3n97GLFpExTSYIX3zIa1up1hFC4pYWYpN6CmVF16SFcBIFN6LZpk1jECCYh uIkQ==
Dkim-signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mongodb.com; s=google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:x-original-sender:x-original-authentication-results :reply-to:precedence:mailing-list:list-id:x-spam-checked-in-group :list-post:list-help:list-archive:sender:list-subscribe :list-unsubscribe; bh=EF+ZloPp3mxgkw0BD3oo97e0P0GQQWFr8Dm0kG5B18A=; b=PJA/KHDEC8+XHO2dmEj6FWUao+oq+GS96EFJ22FWxByN0MUm1pjDE8IuNgY70JBb/l cwaXi8GqVeO0ra4e+VcuK6dvYX9evf/HqMryH35DIE0BttaOHODycyRPwCraiy3NWoQK X+M7pS6Laju4XhtukM6YzdHZxPSnyGSoAykDE=
Envelope-to: traductor@xxxxxxxxxxx
In-reply-to: <6f2898d9-6d00-4ff7-af8e-4d4abab4b523@googlegroups.com>
List-archive: <http://groups.google.com/group/mongodb-use>
List-help: <http://groups.google.com/support/>, <mailto:mongodb-user+help@googlegroups.com>
List-id: <mongodb-user.googlegroups.com>
List-post: <http://groups.google.com/group/mongodb-user/post>, <mailto:mongodb-user@googlegroups.com>
List-subscribe: <http://groups.google.com/group/mongodb-user/subscribe>, <mailto:mongodb-user+subscribe@googlegroups.com>
List-unsubscribe: <mailto:googlegroups-manage+1044811755470+unsubscribe@googlegroups.com>, <http://groups.google.com/group/mongodb-user/subscribe>
Mailing-list: list mongodb-user@xxxxxxxxxxxxxxxx; contact mongodb-user+owners@xxxxxxxxxxxxxxxx
References: <1d24335b-cd66-4f02-80f4-b9be65d91cc0@googlegroups.com> <c0342334-554d-4c65-b5fd-b811ce71f419@googlegroups.com> <6ca93621-854a-4bcc-87bc-26faef6b6de7@googlegroups.com> <f7bfdcca-09cb-4d77-95fd-6028b31c4edb@googlegroups.com> <CAOe6dJCcHUpLvpy1PRwYZTTm2ZuK2mj0WPcjLsJf0juzJ+Rk_A@mail.gmail.com> <aeb293c7-4c45-4134-b437-fcb0fe6aba88@googlegroups.com> <CAOe6dJCY4+vp8Rzoq1f0eF0mM1f_WO5pVYeYp8MswMETy+2KFQ@mail.gmail.com> <e97e3f3e-7a54-4a56-8238-8551bd7e2985@googlegroups.com> <CAOe6dJAPPXWWya3yZLBz2V0FuVOD3+8cFYEK9-5XQ6RKTDjhbg@mail.gmail.com> <16893425-2e36-42a3-902a-70a203592200@googlegroups.com> <CAOe6dJC369fjTi=gjGK0ygtP-ra=Gv=jP3Hzg=RfwRKHD7UKfA@mail.gmail.com> <624bfdbb-2e6d-4640-9976-5741c6a4724d@googlegroups.com> <CAOe6dJAaVRxpb-0=nHwOO9XWYwHdwWdv03g6y9at_X5jnhjhig@mail.gmail.com> <1e0efcf0-c0e3-4cd5-a9fd-7a21844fe44f@googlegroups.com> <CAOe6dJAkrZjN5iCfONbEPqGB9CctH+qL4=gDxr0gii+rb7nGwg@mail.gmail.com> <f19c3639-e220-4e27-a0bb-f1b808df823b@googlegroups.com> <CAOe6dJBqY4MFrRkRVUvmmjaP2HcESkYpQG_yqpbUJ8eqaQW-Dg@mail.gmail.com> <512099b8-45dd-4908-9560-d1f85cfda6b0@googlegroups.com> <ea02d25c-7c17-47f1-a443-c80956fbc5cd@googlegroups.com> <CAOe6dJAtVac-ezkKNhTbdY+rsNp_cxqiwoY1pm=LqJ1ePQetwA@mail.gmail.com> <6f2898d9-6d00-4ff7-af8e-4d4abab4b523@googlegroups.com>
Reply-to: mongodb-user@xxxxxxxxxxxxxxxx
Sender: mongodb-user@xxxxxxxxxxxxxxxx
Cuándo construyo *mdbundo mi *compile parece esto, bien abajo es el vuestro:

*cc -*o *mdbundo -Pared -*Werror -*O0 -*ggdb -yo/*usr/local/incluye/*libbson-1.0
 -*L/*usr/local/*lib -*lbson-1.0   *mdb.*c *mdb.*h *mdbundo.*c

*cc -*o *mdbundo -Pared -*Werror -*O0 -*ggdb -yo/*usr/local/incluye/*libbson-1.0
 -*L/*usr/local/*lib -*lbson-1.0   *mdb.*c *mdb.*h *mdbundo.*c

De modo que miradas igual que significa alguna clase del entorno no puede
haber sido montado la misma manera.
Te supongo construye *libbson sin errores, y exportado *PKG_*CONFIG_CAMINO?

Lo es posible tienes una versión diferente de *libson instalado de vuestros
intentos anteriores?

*Asya


En *Mon, *Aug 31, 2015 en 6:32 AM,  <tobangi@xxxxxxxxx> escribió:
> *Thanks -- útil.    Todavía consiguiendo enganchado en el *compilation de *mdbundo.
>
> Sigo vuestros pasos exactos (incluyendo consiguiendo *libbson de la fuente especificaste).   Pero en "marca *mdbundo" consigo un *compilation error (ve abajo).
> Cualesquier pensamientos?
>
> *Thanks!
> *T
>
> *cc -*o *mdbundo -Pared -*Werror -*O0 -*ggdb -yo/*usr/local/incluye/*libbson-1.0
> -*L/*usr/local/*lib -*lbson-1.0   *mdb.*c *mdb.*h *mdbundo.*c
> /*tmp/*cc1*rVoR4.*o: En función `*db_*init':
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:177: *undefined referencia a `*bson_*strdup_*printf'
> 
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:179: *undefined referencia a `*bson_libre'
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:182: *undefined referencia a `*bson_libre'
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:189: *undefined referencia a `*bson_*strdup_*printf'
> 
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:191: *undefined referencia a `*bson_libre'
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:195: *undefined referencia a `*bson_*realloc'
> 
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:197: *undefined referencia a `*bson_libre'
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:218: *undefined referencia a `*bson_libre'
> /*tmp/*cc1*rVoR4.*o: En registro `de función_*bson':
> /casa/*brejicz/*src/*mdb-maestro/*mdb.*c:608: *undefined referencia a `*bson_*init_*static'
> 
> /*tmp/*ccvogrxm.*o: En función `*fixup_*bson':
> /casa/*brejicz/*src/*mdb-maestro/*mdbundo.*c:28: *undefined referencia a `*bson_*init_*static'
> 
> /casa/*brejicz/*src/*mdb-maestro/*mdbundo.*c:32: *undefined referencia a `*bson_*iter_*init'
> 
> /casa/*brejicz/*src/*mdb-maestro/*mdbundo.*c:36: *undefined referencia a `*bson_*iter_próximo'
> 
> /*tmp/*ccvogrxm.*o: En la función `consigue_*bson_en_*loc':
> /casa/*brejicz/*src/*mdb-maestro/*mdbundo.*c:85: *undefined referencia a `*bson_*init_*static'
> 
> /casa/*brejicz/*src/*mdb-maestro/*mdbundo.*c:89: *undefined referencia a `*bson_valida'
> 
> /*tmp/*ccvogrxm.*o: En función `*mdbundo':
> /casa/*brejicz/*src/*mdb-maestro/*mdbundo.*c:123: *undefined referencia a `*bson_conseguir_datos'
> 
> recoge2: *ld regresó 1 estado de salida
> marca: *** [*mdbundo] Error 1
>
>
> En domingo, agosto 30, 2015 en 9:02:35 PM *UTC-4, *Asya *Kamsky escribió:
>>
>> Aquí es lo que sé trabajos.
>>
>> Descarga 1.0.0 de *libbson.    Lo puedes conseguir de este *url:
>> *https://*github.*com/*mongodb/*libbson/*archive/1.0.0.*zip
>> *Unzip lo, *cd a él, suponiéndote tiene todas las  dependencias instaladas
>> (*compiler, *etc) corrido
>> ./*autoconf.*sh
>> Marca
>> *sudo la marca instala
>>
>> Entonces descarga *mdb la misma manera de *https://*github.*com/*chergert/*mdb/*archive/Maestro.*zip
>> 
>> *unzip Y *cd a él
>> exporta *PKG_*CONFIG_CAMINO=/*usr/local/*lib/*pkgconfig
>> marca *mdbundo
>>
>> Ahora tendrías que tener un *executable llamó ./*mdbundo
>> Puedes necesitar hacer esto para correrlo:
>> CAMINO de BIBLIOTECA_de LD_de exportación=/*usr/local/*lib
>>
>> ./*mdbundo --Uso
>> de ayuda: *mdbundo *DBPATH *DBNAME *COLNAME
>>
>> *DBPATH es el camino a los archivos originales, *DBNAME es *myfs entonces harías esto dos veces una vez para archivos y una vez para *chunks.
>> Si la memoria sirve, el recuperado *bson irá a *stderr tan lo
>> quieres correr gusta
>>
>> ./*mdbundo /Camino/a/mi/*dbfilesdir *myfs archivos > *myfs.Archivos.*bson
>> ./*mdbundo /Camino/a/mi/*dbfilesdir *myfs *chunks > *myfs.*chunks.*bson
>>
>> Nos dejó sabe cómo va!
>>
>> *Asya
>>
>>
>> En *Tue, *Aug 25, 2015 en 9:03 AM,  <tob...@xxxxxxxxx> escribió:
>> >
>> > tendría que añadir que el pertinente *libbson los archivos con aquellos símbolos parecen para
>> > 
>> > ser correctamente instalado, *e.*g. Corriendo
>> >
>> >     *pkg-*config --*cflags --*libs --*libs *libbson-1.0
>> >
>> > regresos
>> >
>> >     -yo/*usr/local/incluye/*libbson-1.0  -*L/*usr/local/*lib -*lbson-1.0
>> >
>> > y dentro /*usr/local/incluye/*libbson-1.0, corriendo "*grep -*R *bson_libre ."
>> >
>> > Regresos:
>> >
>> >     ./*bson.*h: * Regresos: Un nuevamente *allocated cadena que tendría que ser liberado
>> > con *bson_libre().
>> > 
>> >     ./*bson-Memoria.*h:*void  *bson_Libre           (*void   **mem);
>> >
>> >
>> > En martes, agosto 25, 2015 en 11:51:18 SOY *UTC-4, tob...@xxxxxxxxx
>> > escribió:
>> >>
>> >> *Hi --
>> >>
>> >> Teniendo problema *compiling *mdb.     Soy en cometer
>> >> *e7*d018*cad20*f2#uno6*aa62*aea5*db088#uno8*e7*b05#uno190*d de *libbson.   Es que demasiado
>> >> temprano/tarde?
>> >>
>> >> Para conseguir la marca para trabajar nada, incluí el siguiendo línea en
>> >> todo
>> >> el *compilation pasos:
>> >>      -yo /*usr/local/incluye/*libbson-1.0
>> >>
>> >> Es este correcto?
>> >>
>> >> Aun así, aquí el *compilation error:
>> >>
>> >> /*tmp/*ccanf01*B.*o: En función `*db_*init':
>> >> /casa/*brejicz/*mdb/*mdb.*c:177: *undefined referencia a `*bson_*strdup_*printf'
>> >> 
>> >> /casa/*brejicz/*mdb/*mdb.*c:179: *undefined referencia a `*bson_libre'
>> >> /casa/*brejicz/*mdb/*mdb.*c:182: *undefined referencia a `*bson_libre'
>> >> /casa/*brejicz/*mdb/*mdb.*c:189: *undefined referencia a `*bson_*strdup_*printf'
>> >> 
>> >> /casa/*brejicz/*mdb/*mdb.*c:191: *undefined referencia a `*bson_libre'
>> >> /casa/*brejicz/*mdb/*mdb.*c:195: *undefined referencia a `*bson_*realloc'
>> >> /casa/*brejicz/*mdb/*mdb.*c:197: *undefined referencia a `*bson_libre'
>> >> /casa/*brejicz/*mdb/*mdb.*c:218: *undefined referencia a `*bson_libre'
>> >> /*tmp/*ccanf01*B.*o: En registro `de función_*bson':
>> >> /casa/*brejicz/*mdb/*mdb.*c:608: *undefined referencia a `*bson_*init_*static'
>> >> /*tmp/*ccvrjSqo.*o: En la función `principal':
>> >> /casa/*brejicz/*mdb/*mdbdump.*c:70: *undefined referencia a `*bson_tan_*json'
>> >> /casa/*brejicz/*mdb/*mdbdump.*c:73: *undefined referencia a `*bson_libre'
>> >> recoge2: *ld regresó 1 estado de salida
>> >> marca: *** [*mdbdump] Error 1
>> >>
>> >>
>> >> *Sorry para las cuestiones detalladas, pero cualquier ayuda sería grande.
>> >>
>> >> *Thanks!,
>> >> *T
>> >>
>> >> En martes, agosto 25, 2015 en 3:04:16 SOY *UTC-4, *Asya *Kamsky escribió:
>> >>
>> >> sugeriría 1.0 o *whichever la versión es *tagged alrededor del tiempo de último
>> >> cometer en *mdb *repo.
>> >>
>> >> Si consigues la versión incorrecta descubrirás durante complexión.
>> >>
>> >> *Asya
>> >>
>> >>
>> >> En *Mon, *Aug 24, 2015 en 6:43 PM,  <tob...@xxxxxxxxx> escribió:
>> >> > *Ok grande -- *thanks, probará esto.
>> >> >
>> >> > En plazos de conseguir la versión correcta de *libbson, adivino que
>> >> > consiguiendo un
>> >> > cometer de *https://*github.*com/*mongodb/*libbson/Comete/
>> >> >
>> >> >      
>> >> >
>> >> > aquí es la la manera para ir?   Cuáles cometen  sugieres que utilizo?
>> >> >
>> >> > *Thanks!
>> >> > *T
>> >> >
>> >> > En lunes, agosto 24, 2015 en 11:19:02 SOY *UTC-4, *Asya *Kamsky escribió:
>> >> >>
>> >> >> *Okay, bien, lo adivino puede no *hurt así que tendrías que intentar ver si
>> >> >> cualquier cosa
>> >> >> vertido fuera de con algo que lee los fines de archivos crudos arriba de ayudar
>> >> >> nada:
>> >> >> 
>> >> >>
>> >> >> Comprobar este *github *repo:
>> >> >>
>> >> >> *https://*github.*com/*chergert/*mdb
>> >> >>
>> >> >> Esto era mayoritariamente un ejercicio en verter fuera de encima-formato de disco (*nota* a cualquiera
>> >> >> leyendo, esto nunca había sido actualizado desde 2.4 así que sólo estoy
>> >> >> sugiriéndolo específicamente porque el OP está utilizando 2.4 - utilizando esto en versiones
>> >> >> más tardías
>> >> >> sin los cambios sencillamente darán errores).
>> >> >>
>> >> >> Para construir esto necesitarás *libbson versión de cuándo esto fue
>> >> >> escrito
>> >> >> que creo es 1.0.   El *executable quieres correr es llamado
>> >> >> *mdbundo,
>> >> >> aquí su uso.
>> >> >>
>> >> >> Uso: *mdbundo *DBPATH *DBNAME *COLNAME
>> >> >>
>> >> >> lo que hará es el paseo #por el DB archiva buscar registros
>> >> >> eliminados
>> >> >> y entonces intentar restaurarles.   No puedo recordar seguro pero lo
>> >> >> pienso
>> >> >> les enviará a *stdout tan puedes redirigir la producción de esto a un
>> >> >> archivo.
>> >> >> Aquel archivo tiene que haber *bson en ellos tan puedes utilizar *bsondump para verles
>> >> >> (cuál es *pointless para *chunks pero útil para "archivos") y puedes correr
>> >> >> *mongorestore en la producción para cargarlo a una colección nueva así que puedes entonces
>> >> >> figura fuera de si cualquier cosa en hay valioso.
>> >> >>
>> >> >> Evidentemente te corrido lo dos veces en los mismos archivos de DB, una vez para
>> >> >> *myfs.Archivos,
>> >> >> una vez
>> >> >> para *myfs.*chunks.
>> >> >>
>> >> >> Otra vez, dado la producción para validar no realmente #esperar para encontrar
>> >> >> cualquier cosa más que un *handful de registros en más, posiblemente *unusable
>> >> >> (desde entonces
>> >> >> para cada "archivo" cada *chunk la pertenencia a él tiene que ser presente de tener
>> >> >> el
>> >> >> lleno *usable archivo atrás.
>> >> >>
>> >> >> Suerte buena y dejarnos sabe lo que tú *uncover,
>> >> >> *Asya
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> En *Mon, *Aug 24, 2015 en 9:48 AM, <tob...@xxxxxxxxx> escribió:
>> >> >>>
>> >> >>> *Thanks para la respuesta -- *sorry para el retraso en respuesta (era fuera
>> >> >>> de ordenador).
>> >> >>> 
>> >> >>>
>> >> >>>
>> >> >>> En sábado, agosto 22, 2015 en 7:00:54 PM *UTC-4, *Asya *Kamsky
>> >> >>> escribió:
>> >> >>>>
>> >> >>>> Esto un poco está sorprendiendo.  Es allí otras colecciones en
>> >> >>>> este *database?
>> >> >>>
>> >> >>>
>> >> >>> Bien, no realmente,  aquí es el resultado de .Nombres_de colección() "":
>> >> >>>
>> >> >>> [*u'sistema.Índices', *u'*myfs.*chunks', *u'*myfs.Archivos']
>> >> >>>
>> >> >>>
>> >> >>>>
>> >> >>>> Esto es los archivos originales, bien?
>> >> >>>
>> >> >>>
>> >> >>> Sí.
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Te es *seguro* hay archiva perder?  No significo por contar pero
>> >> >>>> por
>> >> >>>> contenido... Mencionaste eras aproximadamente para encontrar algunas cadenas que
>> >> >>>> emparejan
>> >> >>>> algo te esperado para encontrar en el DB pero  no?
>> >> >>>
>> >> >>>
>> >> >>> Sin duda 100% completamente seguro.    Puedo utilizar "*grep" en el *binary
>> >> >>> archivos
>> >> >>> de este *database y veo al menos partes del perdiendo registros,
>> >> >>> *e.*g.
>> >> >>> Algunas
>> >> >>> llaves que son único y es exactamente lo que el perdiendo los registros tendrían.
>> >> >>> Además, yo (y muchos otros) afortunadamente utilizaba esta colección
>> >> >>> como
>> >> >>> la fuente
>> >> >>> una cuadra *restful API para meses ... Al menos, hasta que este
>> >> >>> problema
>> >> >>> ocurrió ...
>> >> >>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Cuántos *dbname.*ns Los Archivos son allí y lo que es sus medidas?
>> >> >>>
>> >> >>>
>> >> >>> En la colección original allí es 90 .*ns Archivos, la mayoría de cuáles
>> >> >>> son
>> >> >>> la medida
>> >> >>> estándar "2146435072", pero algunos del cual (el temprano unos naturalmente)
>> >> >>> es
>> >> >>> más pequeño.
>> >> >>>
>> >> >>> En la colección reparada allí es 35 .*ns Archivos, patrón de medida
>> >> >>> misma.
>> >> >>>
>> >> >>> *Thanks Otra vez para toda la ayuda --
>> >> >>>
>> >> >>> *T
>> >> >>>
>> >> >>>
>> >> >>>>
>> >> >>>>
>> >> >>>> *Asya
>> >> >>>>
>> >> >>>>
>> >> >>>> En viernes, agosto 21, 2015, <tob...@xxxxxxxxx> escribió:
>> >> >>>>>
>> >> >>>>> *Asya,
>> >> >>>>>
>> >> >>>>> *Thanks para la ayuda.  *Ok, aquí es las respuestas a vuestras cuestiones:
>> >> >>>>>
>> >> >>>>> 1) La agregación del *chunk la colección produce 279
>> >> >>>>> documentos.
>> >> >>>>> No recuerdo cuántos había antes de que los quiero fue
>> >> >>>>> perder.
>> >> >>>>> Aun así, adivino allí más que 279.  No sé exactamente cuántos
>> >> >>>>> 
>> >> >>>>> más,
>> >> >>>>> pero espero en el orden de #varios centenar al menos (pero otra vez,
>> >> >>>>> no soy
>> >> >>>>> seguro).    La colección  *verify, dentro que todo el *chunks en
>> >> >>>>> 
>> >> >>>>> el *chunks
>> >> >>>>> la colección es *accounted para por un en fila india en la colección
>> >> >>>>> de archivos.  Computé el número esperado de *chunks para cada archivo por dividir
>> >> >>>>> longitud por
>> >> >>>>> *chunk medida para cada registro de archivo; aquello numera exactamente *equals el
>> >> >>>>> contar de *chunks
>> >> >>>>> para aquel archivo cuando producido por la agregación en el *chunk
>> >> >>>>> colección.
>> >> >>>>>
>> >> >>>>> 2) *Ok, aquí el *info del validar órdenes:
>> >> >>>>>
>> >> >>>>> *db.Archivos.Valida(cierto) produjo:
>> >> >>>>>         "#unknown{^*ns" : "*mydb.*myfs.Archivos",
>> >> >>>>>         "*firstExtent" : "0:19000 *ns:*mydb.*myfs.Archivos",
>> >> >>>>>         "*lastExtent" : "7:67757000 *ns:*mydb.*myfs.Archivos",
>> >> >>>>>         "*extentCount" : 5,
>> >> >>>>>
>> >> >>>>>             <*snip "extensiones">
>> >> >>>>>
>> >> >>>>>         "*datasize" : 24075568,
>> >> >>>>>         "*nrecords" : 279,
>> >> >>>>>         "*lastExtentSize" : 12902400,
>> >> >>>>>         "*padding" : 1,
>> >> >>>>>
>> >> >>>>>             <*snip algunos otro material>
>> >> >>>>>
>> >> >>>>>         "*objectsFound" : 279,
>> >> >>>>>         "*invalidObjects" : 0,
>> >> >>>>>         "*bytesWithHeaders" : 24080032,
>> >> >>>>>         "*bytesWithoutHeaders" : 24075568,
>> >> >>>>>         "*deletedCount" : 3,
>> >> >>>>>         "*deletedSize" : 12008944,
>> >> >>>>>         "*nIndexes" : 3,
>> >> >>>>>         "*keysPerIndex" :
>> >> >>>>>                 "#unknown{^*mydb.*myfs.Archivos.$_*id_" : 279,
>> >> >>>>>                 "*mydb.*myfs.Archivos.$*filename_1_*uploadDate_-1" :
>> >> >>>>> 279,
>> >> >>>>>                 "*mydb.*myfs.Archivos.$*timestamp_1" : 279
>> >> >>>>>         },
>> >> >>>>>         "válido" : cierto,
>> >> >>>>>         "errores" : [ ],
>> >> >>>>>         "*ok" : 1
>> >> >>>>> }
>> >> >>>>>
>> >> >>>>> *db.*chunks.Valida(cierto) produjo:
>> >> >>>>>
>> >> >>>>> *db.*chunks.Valida(cierto)
>> >> >>>>>         "#unknown{^*ns" : "*mydb.*myfs.*chunks",
>> >> >>>>>         "*firstExtent" : "0:5000 *ns:*mydb.*myfs.*chunks",
>> >> >>>>>         "*lastExtent" : "28:2000 *ns:*mydb.*myfs.*chunks",
>> >> >>>>>         "*extentCount" : 43,
>> >> >>>>>
>> >> >>>>>             <*snip "extensiones">
>> >> >>>>>
>> >> >>>>>         "*datasize" : 49854830688,
>> >> >>>>>         "*nrecords" : 190427,
>> >> >>>>>         "*lastExtentSize" : 2146426864,
>> >> >>>>>         "*padding" : 1,
>> >> >>>>>         "*firstExtentDetails" :
>> >> >>>>>                 "#unknown{^*loc" : "0:5000",
>> >> >>>>>                 "*xnext" : "0:162000",
>> >> >>>>>                 "*xprev" : "*null",
>> >> >>>>>                 "*nsdiag" : "*mydb.*myfs.*chunks",
>> >> >>>>>                 "medida" : 8192,
>> >> >>>>>                 "*firstRecord" : "0:50*b0",
>> >> >>>>>                 "*lastRecord" : "0:6*f28"
>> >> >>>>>         },
>> >> >>>>>         "*lastExtentDetails" :
>> >> >>>>>                 "#unknown{^*loc" : "28:2000",
>> >> >>>>>                 "*xnext" : "*null",
>> >> >>>>>                 "*xprev" : "27:2000",
>> >> >>>>>                 "*nsdiag" : "*mydb.*myfs.*chunks",
>> >> >>>>>                 "medida" : 2146426864,
>> >> >>>>>                 "*firstRecord" : "28:20*b0",
>> >> >>>>>                 "*lastRecord" : "28:189*c20*b0"
>> >> >>>>>         },
>> >> >>>>>         "*objectsFound" : 190427,
>> >> >>>>>         "*invalidObjects" : 0,
>> >> >>>>>         "*bytesWithHeaders" : 49857877520,
>> >> >>>>>         "*bytesWithoutHeaders" : 49854830688,
>> >> >>>>>         "*deletedCount" : 20,
>> >> >>>>>         "*deletedSize" : 1733626640,
>> >> >>>>>         "*nIndexes" : 2,
>> >> >>>>>         "*keysPerIndex" :
>> >> >>>>>                 "#unknown{^*mydb.*myfs.*chunks.$_*id_" : 190427,
>> >> >>>>>                 "*mydb.*myfs.*chunks.$Archivos_*id_1_*n_1" : 190427
>> >> >>>>>         },
>> >> >>>>>         "válido" : cierto,
>> >> >>>>>         "errores" : [ ],
>> >> >>>>>         "*ok" : 1
>> >> >>>>> }
>> >> >>>>>
>> >> >>>>> no soy seguro si "*deletedCount" de 3 "" en la colección
>> >> >>>>> de archivos
>> >> >>>>> haría arriba para todos los registros eliminados que espero ... Si lo
>> >> >>>>> interpreto correctamente.   También no hace arriba para el enorme *compaction
>> >> >>>>> que
>> >> >>>>> ocurrió en la versión reparada de la colección (cuál es mucho
>> >> >>>>> más pequeño en
>> >> >>>>> disco, pero todavía contiene 279 registros).
>> >> >>>>>
>> >> >>>>> Cualesquier pensamientos en qué para hacer próximo?   *Thanks Otra vez para vuestra ayuda!
>> >> >>>>>
>> >> >>>>> *T
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> En viernes, agosto 21, 2015 en 2:02:27 PM *UTC-4, *Asya *Kamsky
>> >> >>>>> escribió:
>> >> >>>>>>
>> >> >>>>>> estoy suponiendo que empiezas arriba de *mongod con los archivos originales
>> >> >>>>>> (unos
>> >> >>>>>> no reparaste pero salvado cuándo tú primero *noticed que
>> >> >>>>>> algo
>> >> >>>>>> era
>> >> >>>>>> *amiss):
>> >> >>>>>>
>> >> >>>>>> *Okay, primero, sería bueno de saber qué colecciones faltan
>> >> >>>>>> 
>> >> >>>>>> documentos:
>> >> >>>>>>
>> >> >>>>>> mencionaste habiendo 279 documentos en colección de archivos - cuántos
>> >> >>>>>> 
>> >> >>>>>> son en *chunks colección?    Cruzan-*verify"?
>> >> >>>>>>
>> >> >>>>>> Puedes ver lo que cada cual ha aquí:
>> >> >>>>>> *http://*docs.*mongodb.*org/Referencia/manual/*gridfs/
>> >> >>>>>>
>> >> >>>>>> Así que puedes decir para cada cual archiva cuántos *chunk los documentos allí
>> >> >>>>>> tendrían que
>> >> >>>>>> ser
>> >> >>>>>> y para cada *chunk documento puedes decir cuáles lo archivan pertenece
>> >> >>>>>> a y
>> >> >>>>>> en
>> >> >>>>>> lo que orden.
>> >> >>>>>>
>> >> >>>>>> Algunos maneras posibles a *verify esto:
>> >> >>>>>> *db.*fs.*chunks.*aggregate($#Nom:#nom_*id:1, *num:1}},$#nom:_#unknown{^*id:"$archivos_*id",cuenta:$#nom:1},
>> >> >>>>>> *chunks:$#nom:"$*num"}}}).
>> >> >>>>>>
>> >> >>>>>> Si tienes 279 archivos en vuestro *fs.Colección de archivos, entonces tendrías que
>> >> >>>>>> conseguir
>> >> >>>>>> atrás 279 documentos de esta agregación, cada correspondiente a 
>> >> >>>>>> 
>> >> >>>>>> "un archivo" en
>> >> >>>>>> *gridFS.
>> >> >>>>>>
>> >> >>>>>> Cuántos archivos  esperas tener?
>> >> >>>>>>
>> >> >>>>>> Puedes ahora corrido esta orden (muy lento):
>> >> >>>>>>
>> >> >>>>>> *db.*fs.Archivos.Valida(cierto)
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> y *db.*fs.*chunks.Valida(cierto)
>> >> >>>>>> 
>> >> >>>>>>
>> >> >>>>>> Esto dará la producción que incluye los campos descritos aquí:
>> >> >>>>>>
>> >> >>>>>> *http://*docs.*mongodb.*org/Orden/de referencia/manual/valida/#producción
>> >> >>>>>>
>> >> >>>>>> eres interesado en los registros eliminados
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> (*http://*docs.*mongodb.*org/Orden/de referencia/manual/valida/#valida.*deletedCount
>> >> >>>>>> Y el campo próximo, *deletedSize).
>> >> >>>>>> Básicamente estos son los registros que pueden ser potencialmente
>> >> >>>>>> recuperado -
>> >> >>>>>> puede proporcionas producción a por encima de experimentos así que no malgastamos
>> >> >>>>>> el tiempo
>> >> >>>>>> intentando recuperar dato aquello no es todavía allí?
>> >> >>>>>>
>> >> >>>>>> *Asya
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> En *Fri, *Aug 21, 2015 en 11:33 AM, <tob...@xxxxxxxxx> escribió:
>> >> >>>>>>>
>> >> >>>>>>> Grande *thanks tanto para la ayuda!
>> >> >>>>>>> *T
>> >> >>>>>>>
>> >> >>>>>>> En viernes, agosto 21, 2015 en 9:49:36 SOY *UTC-4, *Asya *Kamsky
>> >> >>>>>>> escribió:
>> >> >>>>>>>>
>> >> >>>>>>>> Algunos de los registros son en una lista libre y no ha sido
>> >> >>>>>>>> *overwritten
>> >> >>>>>>>> todavía.  Puede ser posible de conseguirles atrás.
>> >> >>>>>>>>
>> >> >>>>>>>> Desde entonces estás utilizando versión 2.4.*x Creo que hay código
>> >> >>>>>>>> que
>> >> >>>>>>>> andará #por los archivos de DB y vertedero fuera de cada documento encuentra allí.
>> >> >>>>>>>>
>> >> >>>>>>>> Dejado me cava alrededor puesto que él - evidentemente hay (un) ninguna garantía
>> >> >>>>>>>> que
>> >> >>>>>>>> esto restaurará todo de ellos y (*b)  dices que estos eran
>> >> >>>>>>>> *GridFS *chunks -
>> >> >>>>>>>> si tan entonces no conseguirás atrás un archivo válido a no ser que todo *chunks
>> >> >>>>>>>> de aquel
>> >> >>>>>>>> archivo
>> >> >>>>>>>> es restaurado (junto con registro de archivos correspondientes).
>> >> >>>>>>>>
>> >> >>>>>>>> *Asya
>> >> >>>>>>>>
>> >> >>>>>>>>
>> >> >>>>>>>> En miércoles, agosto 19, 2015, <tob...@xxxxxxxxx> escribió:
>> >> >>>>>>>>>
>> >> >>>>>>>>> *Thanks para la sugerencia.   El *logs no aparece para mostrar
>> >> >>>>>>>>> cualesquier
>> >> >>>>>>>>> traslados en aquel *database desde *Oct. 2014.    Al menos tan lejos
>> >> >>>>>>>>> cuando
>> >> >>>>>>>>> estos *logs
>> >> >>>>>>>>> va, no mira a a mí le gusta un sacar fue hecho.
>> >> >>>>>>>>>
>> >> >>>>>>>>> Cualesquier sugerencias para qué para hacer ahora?   Sin duda puedo ver
>> >> >>>>>>>>> rastros
>> >> >>>>>>>>> del dato quiero conseguir en en el (*unmodified) *database
>> >> >>>>>>>>> archivos.
>> >> >>>>>>>>> Es allí
>> >> >>>>>>>>> ninguna manera de conseguir en el dato en ellos?
>> >> >>>>>>>>>
>> >> >>>>>>>>> *Thanks!
>> >> >>>>>>>>> *T
>> >> >>>>>>>>>
>> >> >>>>>>>>>
>> >> >>>>>>>>> En miércoles, agosto 19, 2015 en 2:53:46 PM *UTC-4, *Asya
>> >> >>>>>>>>> *Kamsky
>> >> >>>>>>>>> escribió:
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> El único *CRUD operaciones que serían *logged sería los que
>> >> >>>>>>>>>> 
>> >> >>>>>>>>>> toman más largos que 100*ms.
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> Podría mirar algo así:
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> 2015-07-14*T12:10:34.559-0500 ESCRIBO    [*conn4822] saca
>> >> >>>>>>>>>> *blackbox.*bigcoll Consulta: _#unknown{^*id: $#unknown{^*gte: 10000.0 } }
>> >> >>>>>>>>>> *ndeleted:90000
>> >> >>>>>>>>>> *keyUpdates:0 *writeConflicts:0 *numYields:12330 cerraduras:
>> >> >>>>>>>>>> #adj:
>> >> >>>>>>>>>> #unknown{^*acquireCount: #unknown{^*r: 12331, *w: 12331 } }, *Database:
>> >> >>>>>>>>>> #unknown{^*acquireCount: #unknown{^*w: 12331
>> >> >>>>>>>>>> } }, Colección: #unknown{^*acquireCount: #unknown{^*w: 12331 } } } 249168*ms
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> Naturalmente esto era una masa enorme saca de *giant *docs que es
>> >> >>>>>>>>>> por qué
>> >> >>>>>>>>>> tomó tan mucho tiempo.   Si el número de los documentos eliminados era
>> >> >>>>>>>>>> pequeño
>> >> >>>>>>>>>> (como si
>> >> >>>>>>>>>> fueron eliminados uno a la vez en vez de *via orden
>> >> >>>>>>>>>> sola)
>> >> >>>>>>>>>> entonces es
>> >> >>>>>>>>>> mucho menos probablemente habrían tomado más largos que 100*ms.
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> Sugeriría buscar #por el *logs con orden
>> >> >>>>>>>>>> equivalente
>> >> >>>>>>>>>> a:
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> *grep 'sacar <*dbname>' *.*log.*
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> Dónde reemplazarías <*dbname> con el nombre de vuestro
>> >> >>>>>>>>>> DB
>> >> >>>>>>>>>> real
>> >> >>>>>>>>>> y reemplazar *.*log.* Con camino a vuestro *log archivos.
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> *Asya
>> >> >>>>>>>>>>
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> En *Tue, *Aug 18, 2015 en 1:56 PM, <tob...@xxxxxxxxx> escribió:
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> En martes, agosto 18, 2015 en 1:53:13 PM *UTC-4, *Asya
>> >> >>>>>>>>>>> *Kamsky
>> >> >>>>>>>>>>> escribió:
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>>  tienes el *logs de alrededor del tiempo el *database
>> >> >>>>>>>>>>>> chocó?
>> >> >>>>>>>>>>>> Cuando era que relativo a cuándo tú *noticed los registros que
>> >> >>>>>>>>>>>> pierden?
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> Sí tengo todo el *logs volviendo para un año.   Yo *noticed
>> >> >>>>>>>>>>> los
>> >> >>>>>>>>>>> registros que pierden el día que escribí la cuestión de ayuda
>> >> >>>>>>>>>>> originalmente (hace
>> >> >>>>>>>>>>> varios días).   El accidente pasó hace varias semanas.
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>> Desde ti hubo *journaling encima, y eras capaz de retomar
>> >> >>>>>>>>>>>> *mongod
>> >> >>>>>>>>>>>> sin cualesquier errores, gobernaría fuera del accidente que *causa*
>> >> >>>>>>>>>>>> registros de desaparecer,
>> >> >>>>>>>>>>>> el cual te deja con la posibilidad que fueron eliminados.
>> >> >>>>>>>>>>>> Si fueron eliminados y no tienes cualquier *backups de *antes
>> >> >>>>>>>>>>>> de que* fueron
>> >> >>>>>>>>>>>> eliminados, entonces allí ha no mucho que pueden ser hecho para recuperar
>> >> >>>>>>>>>>>> el dato - seguro,
>> >> >>>>>>>>>>>> es posible de escribir código a literalmente mirada #por la lista
>> >> >>>>>>>>>>>> eliminada, pero
>> >> >>>>>>>>>>>> si cualquier dato fue insertado después del elimina entonces aquel espacio
>> >> >>>>>>>>>>>> sería *reused
>> >> >>>>>>>>>>>> y el dato viejo sería *overwritten *forever.   :(
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> El *backups no es de antes de que desafortunadamente.   Pero
>> >> >>>>>>>>>>>  no
>> >> >>>>>>>>>>> allí ser algún registro en el *logs si había un *deleation?
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> No hemos escrito cualesquier registros desde el elimina.   (Los
>> >> >>>>>>>>>>> archivos
>> >> >>>>>>>>>>> no han cambiado para meses.)
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>> *Asya
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>> En *Tue, *Aug 18, 2015 en 12:54 PM, <tob...@xxxxxxxxx>
>> >> >>>>>>>>>>>> escribió:
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> *Hi *Ankit --
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> *Thanks tanto para vuestra respuesta -- es realmente apreciado.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Aquí es algunos contesta a vuestras cuestiones:
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 1) es un *standalone *deployment, con ningún *replica conjunto (estoy
>> >> >>>>>>>>>>>>> utilizando
>> >> >>>>>>>>>>>>> *journaling aun así)
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 2) la versión es 2.4.6
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 3) Aquí es la producción de explicar (hecho #por *pymongo):#unknown{^*u'*allPlans': [#unknown{^*u'*cursor': *u'*BasicCursor',
>> >> >>>>>>>>>>>>>    *u'*indexBounds': {},
>> >> >>>>>>>>>>>>>    *u'*n': 0,
>> >> >>>>>>>>>>>>>    *u'*nscanned': 279,
>> >> >>>>>>>>>>>>>    *u'*nscannedObjects': 279}],
>> >> >>>>>>>>>>>>>  *u'*cursor': *u'*BasicCursor',
>> >> >>>>>>>>>>>>>  *u'*indexBounds': {},
>> >> >>>>>>>>>>>>>  *u'*indexOnly': Falso,
>> >> >>>>>>>>>>>>>  *u'*isMultiKey': Falso,
>> >> >>>>>>>>>>>>>  *u'*millis': 176,
>> >> >>>>>>>>>>>>>  *u'*n': 0,
>> >> >>>>>>>>>>>>>  *u'*nChunkSkips': 0,
>> >> >>>>>>>>>>>>>  *u'*nYields': 1,
>> >> >>>>>>>>>>>>>  *u'*nscanned': 279,
>> >> >>>>>>>>>>>>>  *u'*nscannedAllPlans': 279,
>> >> >>>>>>>>>>>>>  *u'*nscannedObjects': 279,
>> >> >>>>>>>>>>>>>  *u'*nscannedObjectsAllPlans': 279,
>> >> >>>>>>>>>>>>>  *u'*scanAndOrder': Falso,
>> >> >>>>>>>>>>>>>  *u'*server': myserver@myport}
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> (pienso está diciendo 279 aquí porque que es el número de los
>> >> >>>>>>>>>>>>> registros mostrados -- no 250, gusta dije antes, que era una
>> >> >>>>>>>>>>>>> equivocación.)
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 4) Aquí es la información de índice (también #por *pymongo):#unknown{^*u'_*id_':#unknown{^*u'llave': [(*u'_*id', 1)], *u'*v': 1},
>> >> >>>>>>>>>>>>>  *u'*filename_1_*uploadDate_-1':#unknown{^*u'llave': [(*u'*filename', 1),
>> >> >>>>>>>>>>>>> (*u'*uploadDate', -1)],
>> >> >>>>>>>>>>>>>   *u'*v': 1},
>> >> >>>>>>>>>>>>>  *u'*timestamp_1':#unknown{^*u'llave': [(*u'*timestamp', 1)], *u'*v': 1}}
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Cuando puedes ver, esto es de un ".Colección" de archivos utilizada
>> >> >>>>>>>>>>>>> para
>> >> >>>>>>>>>>>>> dirigir un *gridfs caso.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> También pienso que tengo alguna información adicional.   Copié
>> >> >>>>>>>>>>>>> el
>> >> >>>>>>>>>>>>> *database archivos a una ubicación nueva y corrió una reparación llena en
>> >> >>>>>>>>>>>>> el entero
>> >> >>>>>>>>>>>>> *database.   EN la versión reparada nueva, había muchos
>> >> >>>>>>>>>>>>> menos archivos .*ns
>> >> >>>>>>>>>>>>> Archivos.   Cuando es tan si algunos registros fueron sacados utilizando *e.*g.
>> >> >>>>>>>>>>>>> *gridfs.*GridFS.Saca ... Tan en el nuevo (compactado)
>> >> >>>>>>>>>>>>> versión
>> >> >>>>>>>>>>>>> el dato era
>> >> >>>>>>>>>>>>> finalmente eliminado.  Quizás su *somehow posible que los
>> >> >>>>>>>>>>>>> registros fueron sacados
>> >> >>>>>>>>>>>>> del *gridfs colección por alguien corriendo un sacar
>> >> >>>>>>>>>>>>> operación?   No veo cualquier evidencia de tal una operación en el *logs pero
>> >> >>>>>>>>>>>>> quizás es posible?
>> >> >>>>>>>>>>>>> He mirado cuidadosamente en el *logs y no estoy viendo
>> >> >>>>>>>>>>>>> cualquier cosa
>> >> >>>>>>>>>>>>> obvio, pero
>> >> >>>>>>>>>>>>> quizás *dont' saber qué para buscar.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Naturalmente, que adivino no modifica el *underlying
>> >> >>>>>>>>>>>>> archivos
>> >> >>>>>>>>>>>>> en
>> >> >>>>>>>>>>>>> la colección original donde el (hipotético) saca
>> >> >>>>>>>>>>>>> habría sido corrido.
>> >> >>>>>>>>>>>>> Si esto de hecho el caso, es allí alguna manera de conseguir el
>> >> >>>>>>>>>>>>> dato
>> >> >>>>>>>>>>>>> atrás?  Después de todo,
>> >> >>>>>>>>>>>>> tengo el original .*ns Archivos.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> *Thanks!
>> >> >>>>>>>>>>>>> *Tob
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> En lunes, agosto 17, 2015 en 12:47:08 PM *UTC-4, *Ankit
>> >> >>>>>>>>>>>>> *Kakkar
>> >> >>>>>>>>>>>>> escribió:
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> Hola *Tobjan,
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> *Thanks para lograr fuera a #prpers. Para asistirnos en
>> >> >>>>>>>>>>>>>> la investigación de este caso, complacer proporcionarnos con seguir
>> >> >>>>>>>>>>>>>> información:
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> 1) Describe vuestro *MongoDB *deployment (*standalone, *replica
>> >> >>>>>>>>>>>>>> conjunto,
>> >> >>>>>>>>>>>>>> o *sharded grupo)
>> >> >>>>>>>>>>>>>> 2) Qué versión de *MongoDB te es corriendo? (Lo puedes
>> >> >>>>>>>>>>>>>> comprobar con *db.Versión() en el *mongo pelar)
>> >> >>>>>>>>>>>>>> 3) Cuál consulta  utilizas para encontrar o contar los
>> >> >>>>>>>>>>>>>> documentos?
>> >> >>>>>>>>>>>>>> Te
>> >> >>>>>>>>>>>>>> #poder complacer corrido que consulta con explicar() y enviarnos la producción?
>> >> >>>>>>>>>>>>>> 
>> >> >>>>>>>>>>>>>> 4) Producción de *db.Colección.*getIndexes() Para
>> >> >>>>>>>>>>>>>> la colección
>> >> >>>>>>>>>>>>>> donde los documentos aparecen para faltar.
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> Consideraciones,
>> >> >>>>>>>>>>>>>> *ankit
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> En lunes, agosto 17, 2015 en 12:56:20 PM *UTC+5:30,
>> >> >>>>>>>>>>>>>> Chris
>> >> >>>>>>>>>>>>>> *De
>> >> >>>>>>>>>>>>>> *Bruyne escribió:
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>> Puede das algunos más *info?
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>> Como qué consulta te es haciendo para encontrar los documentos,
>> >> >>>>>>>>>>>>>>> qué
>> >> >>>>>>>>>>>>>>> es
>> >> >>>>>>>>>>>>>>> la estructura del *docs, es allí índices en esta
>> >> >>>>>>>>>>>>>>> colección?
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>> En domingo, agosto 16, 2015 en 12:33:16 SOY *UTC+2,
>> >> >>>>>>>>>>>>>>> tob...@xxxxxxxxx escribió:
>> >> >>>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>> tengo un *mongodb *database conteniendo una colección que
>> >> >>>>>>>>>>>>>>>> ha
>> >> >>>>>>>>>>>>>>>> quedado *unchanged para un periodo de tiempo -- el
>> >> >>>>>>>>>>>>>>>> *underlying
>> >> >>>>>>>>>>>>>>>> .*ns Y .0, .1 *etc
>> >> >>>>>>>>>>>>>>>> los archivos no han sido modificados para meses. Arriba hasta que hace
>> >> >>>>>>>>>>>>>>>> 
>> >> >>>>>>>>>>>>>>>> unas cuantas semanas, no tuve ningún
>> >> >>>>>>>>>>>>>>>> problema leyendo registros de la colección. había
>> >> >>>>>>>>>>>>>>>> millares de registros
>> >> >>>>>>>>>>>>>>>> en la colección.
>> >> >>>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>> Aun así, hoy cuándo intenté para leer los registros,
>> >> >>>>>>>>>>>>>>>> muchos
>> >> >>>>>>>>>>>>>>>> de ellos
>> >> >>>>>>>>>>>>>>>> aparecido perdiendo -- *e.*g. Registros que esperé
>> >> >>>>>>>>>>>>>>>> ser no hubo,
>> >> >>>>>>>>>>>>>>>> a pesar de que algunos de los registros eran disponibles. Ahora,
>> >> >>>>>>>>>>>>>>>> allí
>> >> >>>>>>>>>>>>>>>> aparecer para ser sólo
>> >> >>>>>>>>>>>>>>>> 250 registros. =
>> >> >>>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>> Copié el *database archivos y hizo una reparación del
>> >> >>>>>>>>>>>>>>>> (copiado)
>> >> >>>>>>>>>>
>> >>
>> >> ...
>> >
>> > --
>> > Recibiste este mensaje porque eres *subscribed al *Google
>> > Grupos
>> > "*mongodb-grupo"
>> > de usuario.
>> >
>> > Para otro *MongoDB opciones de apoyo técnico, ve:
>> > *http://www.mongodb.org/sobre/apoyo/.
>> > ---
>> > Recibiste este mensaje porque eres *subscribed al *Google
>> > Grupos
>> > "*mongodb-grupo" de usuario.
>> > A *unsubscribe de este grupo y la parón que recibe *emails de él, enviar
>> > un
>> > *email a *mongodb-user...@xxxxxxxxxxxxxxxx.
>> > A correo a este grupo, envía *email a mongod...@xxxxxxxxxxxxxxxx.
>> > Visita este grupo en *http://grupos.*google.*com/Grupo/*mongodb-usuario.
>> > Para ver esta discusión en la visita de web
>> >
>> > *https://grupos.*google.*com/*d/*msgid/*mongodb-Usuario/*ea02*d25*c-7*c17-47*f1-#uno443-*c80956*fbc5*cd%40*googlegroups.*com.
>> >
>> > Para más opciones, visita *https://grupos.*google.*com/*d/*optout.
>
> --
> Recibiste este mensaje porque eres *subscribed al *Google Grupos
> "*mongodb-grupo"
> de usuario.
>
> Para otro *MongoDB opciones de apoyo técnico, ve:
> *http://www.mongodb.org/sobre/apoyo/.
> ---
> Recibiste este mensaje porque eres *subscribed al *Google Grupos
> "*mongodb-grupo" de usuario.
> A *unsubscribe de este grupo y la parón que recibe *emails de él, enviar un
> *email a *mongodb-usuario+unsubscribe@xxxxxxxxxxxxxxxx.
> A correo a este grupo, envía *email a *mongodb-user@xxxxxxxxxxxxxxxx.
> Visita este grupo en *http://grupos.*google.*com/Grupo/*mongodb-usuario.
> Para ver esta discusión en la visita de web
> *https://grupos.*google.*com/*d/*msgid/*mongodb-Usuario/6*f2898*d9-6*d00-4*ff7-*af8*e-4*d4*abab4*b523%40*googlegroups.*com.
>
> Para más opciones, visita *https://grupos.*google.*com/*d/*optout.

-- 
Recibiste este mensaje porque eres *subscribed al *Google Grupos "*mongodb-grupo"
de usuario.

Para otro *MongoDB opciones de apoyo técnico, ve: *http://www.mongodb.org/sobre/apoyo/.
--- 
Recibiste este mensaje porque eres *subscribed al *Google Grupos "*mongodb-grupo" de usuario.
A *unsubscribe de este grupo y la parón que recibe *emails de él, enviar un *email a *mongodb-usuario+unsubscribe@xxxxxxxxxxxxxxxx.
A correo a este grupo, envía *email a *mongodb-user@xxxxxxxxxxxxxxxx.
Visita este grupo en *http://grupos.*google.*com/Grupo/*mongodb-usuario.
Para ver esta discusión en la visita de web *https://grupos.*google.*com/*d/*msgid/*mongodb-Usuario/*CAOe6*dJDTQ7*N7*sXLnHv8*HqcCKkxZ%3*DwMBhMVnM34#Uno%3*DMfzS9*bLYKg%40correo.*gmail.*com.
Para más opciones, visita *https://grupos.*google.*com/*d/*optout.

When I build mdbundo my compile looks like this, right below it is yours:

cc -o mdbundo -Wall -Werror -O0 -ggdb -I/usr/local/include/libbson-1.0
 -L/usr/local/lib -lbson-1.0   mdb.c mdb.h mdbundo.c

cc -o mdbundo -Wall -Werror -O0 -ggdb -I/usr/local/include/libbson-1.0
 -L/usr/local/lib -lbson-1.0   mdb.c mdb.h mdbundo.c

So that looks the same which means some sort of environment may not
have been set up the same way.
I assume you build libbson without errors, and exported PKG_CONFIG_PATH?

Is it possible you have a different version of libson installed from
your previous attempts?

Asya


On Mon, Aug 31, 2015 at 6:32 AM,  <tobangi@xxxxxxxxx> wrote:
> Thanks -- helpful.    Still getting stuck on the compilation of mdbundo.
>
> I follow your exact steps (including getting libbson from the source you
> specified).   But at "make mdbundo" I get a compilation error (see below).
> Any thoughts?
>
> Thanks!
> T
>
> cc -o mdbundo -Wall -Werror -O0 -ggdb -I/usr/local/include/libbson-1.0
> -L/usr/local/lib -lbson-1.0   mdb.c mdb.h mdbundo.c
> /tmp/cc1rVoR4.o: In function `db_init':
> /home/brejicz/src/mdb-master/mdb.c:177: undefined reference to
> `bson_strdup_printf'
> /home/brejicz/src/mdb-master/mdb.c:179: undefined reference to `bson_free'
> /home/brejicz/src/mdb-master/mdb.c:182: undefined reference to `bson_free'
> /home/brejicz/src/mdb-master/mdb.c:189: undefined reference to
> `bson_strdup_printf'
> /home/brejicz/src/mdb-master/mdb.c:191: undefined reference to `bson_free'
> /home/brejicz/src/mdb-master/mdb.c:195: undefined reference to
> `bson_realloc'
> /home/brejicz/src/mdb-master/mdb.c:197: undefined reference to `bson_free'
> /home/brejicz/src/mdb-master/mdb.c:218: undefined reference to `bson_free'
> /tmp/cc1rVoR4.o: In function `record_bson':
> /home/brejicz/src/mdb-master/mdb.c:608: undefined reference to
> `bson_init_static'
> /tmp/ccvogrxm.o: In function `fixup_bson':
> /home/brejicz/src/mdb-master/mdbundo.c:28: undefined reference to
> `bson_init_static'
> /home/brejicz/src/mdb-master/mdbundo.c:32: undefined reference to
> `bson_iter_init'
> /home/brejicz/src/mdb-master/mdbundo.c:36: undefined reference to
> `bson_iter_next'
> /tmp/ccvogrxm.o: In function `get_bson_at_loc':
> /home/brejicz/src/mdb-master/mdbundo.c:85: undefined reference to
> `bson_init_static'
> /home/brejicz/src/mdb-master/mdbundo.c:89: undefined reference to
> `bson_validate'
> /tmp/ccvogrxm.o: In function `mdbundo':
> /home/brejicz/src/mdb-master/mdbundo.c:123: undefined reference to
> `bson_get_data'
> collect2: ld returned 1 exit status
> make: *** [mdbundo] Error 1
>
>
> On Sunday, August 30, 2015 at 9:02:35 PM UTC-4, Asya Kamsky wrote:
>>
>> Here is what I know works.
>>
>> Download 1.0.0 of libbson.    You can get it from this url:
>> https://github.com/mongodb/libbson/archive/1.0.0.zip
>> Unzip it, cd into it, assuming you have all dependencies installed
>> (compiler, etc) run
>> ./autoconf.sh
>> make
>> sudo make install
>>
>> Then download mdb the same way from
>> https://github.com/chergert/mdb/archive/master.zip
>> unzip and cd into it
>> export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
>> make mdbundo
>>
>> Now you should have an executable called ./mdbundo
>> You may need to do this to run it:
>> export LD_LIBRARY_PATH=/usr/local/lib
>>
>> ./mdbundo --help
>> usage: mdbundo DBPATH DBNAME COLNAME
>>
>> DBPATH is the path to the original files, DBNAME is myfs then you
>> would do this twice once for files and once for chunks.
>> If memory serves, the recovered bson will go to stderr so you want to
>> run it like
>>
>> ./mdbundo /path/to/my/dbfilesdir myfs files > myfs.files.bson
>> ./mdbundo /path/to/my/dbfilesdir myfs chunks > myfs.chunks.bson
>>
>> Let us know how it goes!
>>
>> Asya
>>
>>
>> On Tue, Aug 25, 2015 at 9:03 AM,  <tob...@xxxxxxxxx> wrote:
>> >
>> > I should add that the relevant libbson files with those symbols seem to
>> > be
>> > properly installed, e.g. running
>> >
>> >     pkg-config --cflags --libs --libs libbson-1.0
>> >
>> > returns
>> >
>> >     -I/usr/local/include/libbson-1.0  -L/usr/local/lib -lbson-1.0
>> >
>> > and inside /usr/local/include/libbson-1.0, running "grep -R bson_free ."
>> >
>> > returns:
>> >
>> >     ./bson.h: * Returns: A newly allocated string that should be freed
>> > with
>> > bson_free().
>> >     ./bson-memory.h:void  bson_free           (void   *mem);
>> >
>> >
>> > On Tuesday, August 25, 2015 at 11:51:18 AM UTC-4, tob...@xxxxxxxxx
>> > wrote:
>> >>
>> >> Hi --
>> >>
>> >> Having trouble compiling mdb.     I'm at commit
>> >> e7d018cad20f2a6aa62aea5db088a8e7b05a190d of libbson.   Is that too
>> >> early/late?
>> >>
>> >> To get the make to work at all, I included the following line in all
>> >> the
>> >> compilation steps:
>> >>      -I /usr/local/include/libbson-1.0
>> >>
>> >> Is this right?
>> >>
>> >> Even so, here's the compilation error:
>> >>
>> >> /tmp/ccanf01B.o: In function `db_init':
>> >> /home/brejicz/mdb/mdb.c:177: undefined reference to
>> >> `bson_strdup_printf'
>> >> /home/brejicz/mdb/mdb.c:179: undefined reference to `bson_free'
>> >> /home/brejicz/mdb/mdb.c:182: undefined reference to `bson_free'
>> >> /home/brejicz/mdb/mdb.c:189: undefined reference to
>> >> `bson_strdup_printf'
>> >> /home/brejicz/mdb/mdb.c:191: undefined reference to `bson_free'
>> >> /home/brejicz/mdb/mdb.c:195: undefined reference to `bson_realloc'
>> >> /home/brejicz/mdb/mdb.c:197: undefined reference to `bson_free'
>> >> /home/brejicz/mdb/mdb.c:218: undefined reference to `bson_free'
>> >> /tmp/ccanf01B.o: In function `record_bson':
>> >> /home/brejicz/mdb/mdb.c:608: undefined reference to `bson_init_static'
>> >> /tmp/ccvrjSqo.o: In function `main':
>> >> /home/brejicz/mdb/mdbdump.c:70: undefined reference to `bson_as_json'
>> >> /home/brejicz/mdb/mdbdump.c:73: undefined reference to `bson_free'
>> >> collect2: ld returned 1 exit status
>> >> make: *** [mdbdump] Error 1
>> >>
>> >>
>> >> Sorry for the detailed questions, but any help would be great.
>> >>
>> >> Thanks!,
>> >> T
>> >>
>> >> On Tuesday, August 25, 2015 at 3:04:16 AM UTC-4, Asya Kamsky wrote:
>> >>
>> >> I'd suggest 1.0 or whichever version is tagged around the time of last
>> >> commit on mdb repo.
>> >>
>> >> If you get the wrong version you'll find out during build.
>> >>
>> >> Asya
>> >>
>> >>
>> >> On Mon, Aug 24, 2015 at 6:43 PM,  <tob...@xxxxxxxxx> wrote:
>> >> > Ok great -- thanks, will try this.
>> >> >
>> >> > In terms of getting the right version of libbson, I guess that
>> >> > getting a
>> >> > commit from
>> >> >
>> >> >      https://github.com/mongodb/libbson/commits/
>> >> >
>> >> > here is the the way to go?   Which commit do you suggest I use?
>> >> >
>> >> > Thanks!
>> >> > T
>> >> >
>> >> > On Monday, August 24, 2015 at 11:19:02 AM UTC-4, Asya Kamsky wrote:
>> >> >>
>> >> >> Okay, well, I guess it can't hurt so you should try to see if
>> >> >> anything
>> >> >> dumped out with something that reads the raw files ends up helping
>> >> >> at
>> >> >> all:
>> >> >>
>> >> >> Check out this github repo:
>> >> >>
>> >> >> https://github.com/chergert/mdb
>> >> >>
>> >> >> This was mostly an exercise in dumping out on-disk format (*note* to
>> >> >> anyone reading, this had never been updated since 2.4 so I'm only
>> >> >> suggesting
>> >> >> it specifically because OP is using 2.4 - using this on later
>> >> >> versions
>> >> >> without changes will simply give errors).
>> >> >>
>> >> >> To build this you will need libbson version from when this was
>> >> >> written
>> >> >> which I believe is 1.0.   The executable you want to run is called
>> >> >> mdbundo,
>> >> >> here's its usage.
>> >> >>
>> >> >> usage: mdbundo DBPATH DBNAME COLNAME
>> >> >>
>> >> >> what it will do is walk through the DB files looking for deleted
>> >> >> records
>> >> >> and then try to restore them.   I can't remember for sure but I
>> >> >> think
>> >> >> it
>> >> >> will send them to stdout so you can redirect the output of this to a
>> >> >> file.
>> >> >> That file should have bson in them so you can use bsondump to view
>> >> >> them
>> >> >> (which is pointless for chunks but useful for "files") and you can
>> >> >> run
>> >> >> mongorestore on the output to load it into a new collection so you
>> >> >> can
>> >> >> then
>> >> >> figure out if anything in there is valuable.
>> >> >>
>> >> >> Obviously you run it twice on the same DB files, once for
>> >> >> myfs.files,
>> >> >> once
>> >> >> for myfs.chunks.
>> >> >>
>> >> >> Again, given the output to validate I don't really expect it to find
>> >> >> anything more than a handful of records at most, possibly unusable
>> >> >> (since
>> >> >> for every "file" every chunk belonging to it must be present to have
>> >> >> the
>> >> >> full usable file back.
>> >> >>
>> >> >> Good luck and let us know what you uncover,
>> >> >> Asya
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Mon, Aug 24, 2015 at 9:48 AM, <tob...@xxxxxxxxx> wrote:
>> >> >>>
>> >> >>> Thanks for the response -- sorry for the delay in reply (was away
>> >> >>> from
>> >> >>> computer).
>> >> >>>
>> >> >>>
>> >> >>> On Saturday, August 22, 2015 at 7:00:54 PM UTC-4, Asya Kamsky
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> This is somewhat surprising.  Are there other collections in this
>> >> >>>> database?
>> >> >>>
>> >> >>>
>> >> >>> Well, not really,  here is the result of ".collection_names()":
>> >> >>>
>> >> >>> [u'system.indexes', u'myfs.chunks', u'myfs.files']
>> >> >>>
>> >> >>>
>> >> >>>>
>> >> >>>> This is the original files, right?
>> >> >>>
>> >> >>>
>> >> >>> Yes.
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Are you *sure* there are files missing?  I don't mean by count but
>> >> >>>> by
>> >> >>>> content... You mentioned you were about to find some strings
>> >> >>>> matching
>> >> >>>> something you expected to find in the DB but didn't?
>> >> >>>
>> >> >>>
>> >> >>> Definitely 100% completely sure.    I can use "grep" on the binary
>> >> >>> files
>> >> >>> of this database and I see at least parts of the missing records,
>> >> >>> e.g.
>> >> >>> some
>> >> >>> keys that are unique and are exactly what the missing records would
>> >> >>> have.
>> >> >>> Moreover, I (and several others) were happily using this collection
>> >> >>> as
>> >> >>> the
>> >> >>> source a stable restful API for months ... at least, until this
>> >> >>> problem
>> >> >>> occurred ...
>> >> >>>
>> >> >>>>
>> >> >>>>
>> >> >>>> How many dbname.ns files are there and what are their sizes?
>> >> >>>
>> >> >>>
>> >> >>> In the original collection there are 90 .ns files, most of which
>> >> >>> are
>> >> >>> the
>> >> >>> standard size "2146435072", but some of which (the early ones of
>> >> >>> course) are
>> >> >>> smaller.
>> >> >>>
>> >> >>> In the repaired collection there are 35 .ns files, same size
>> >> >>> pattern.
>> >> >>>
>> >> >>> Thanks again for all the help --
>> >> >>>
>> >> >>> T
>> >> >>>
>> >> >>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Asya
>> >> >>>>
>> >> >>>>
>> >> >>>> On Friday, August 21, 2015, <tob...@xxxxxxxxx> wrote:
>> >> >>>>>
>> >> >>>>> Asya,
>> >> >>>>>
>> >> >>>>> Thanks for the help.  Ok, here are the answers to your questions:
>> >> >>>>>
>> >> >>>>> 1) The aggregation of the chunk collection produces 279
>> >> >>>>> documents.
>> >> >>>>> I
>> >> >>>>> don't recall how many there were before the ones I want went
>> >> >>>>> missing.
>> >> >>>>> However, I guess there more than 279.  I don't know exactly how
>> >> >>>>> many
>> >> >>>>> more,
>> >> >>>>> but I expect on the order of several hundred at least (but again,
>> >> >>>>> I'm not
>> >> >>>>> sure).    The collection does verify, in that all the chunks in
>> >> >>>>> the
>> >> >>>>> chunks
>> >> >>>>> collection are accounted for by a single file in the files
>> >> >>>>> collection.  I
>> >> >>>>> computed the expected number of chunks for each file by dividing
>> >> >>>>> length by
>> >> >>>>> chunk size for each file record; that number exactly equals the
>> >> >>>>> count of
>> >> >>>>> chunks for that file as produced by the aggregation in the chunk
>> >> >>>>> collection.
>> >> >>>>>
>> >> >>>>> 2) Ok, here's the info from the validate commands:
>> >> >>>>>
>> >> >>>>> db.files.validate(true) produced:
>> >> >>>>>
>> >> >>>>> {
>> >> >>>>>         "ns" : "mydb.myfs.files",
>> >> >>>>>         "firstExtent" : "0:19000 ns:mydb.myfs.files",
>> >> >>>>>         "lastExtent" : "7:67757000 ns:mydb.myfs.files",
>> >> >>>>>         "extentCount" : 5,
>> >> >>>>>
>> >> >>>>>             <snip "extents">
>> >> >>>>>
>> >> >>>>>         "datasize" : 24075568,
>> >> >>>>>         "nrecords" : 279,
>> >> >>>>>         "lastExtentSize" : 12902400,
>> >> >>>>>         "padding" : 1,
>> >> >>>>>
>> >> >>>>>             <snip some other stuff>
>> >> >>>>>
>> >> >>>>>         "objectsFound" : 279,
>> >> >>>>>         "invalidObjects" : 0,
>> >> >>>>>         "bytesWithHeaders" : 24080032,
>> >> >>>>>         "bytesWithoutHeaders" : 24075568,
>> >> >>>>>         "deletedCount" : 3,
>> >> >>>>>         "deletedSize" : 12008944,
>> >> >>>>>         "nIndexes" : 3,
>> >> >>>>>         "keysPerIndex" : {
>> >> >>>>>                 "mydb.myfs.files.$_id_" : 279,
>> >> >>>>>                 "mydb.myfs.files.$filename_1_uploadDate_-1" :
>> >> >>>>> 279,
>> >> >>>>>                 "mydb.myfs.files.$timestamp_1" : 279
>> >> >>>>>         },
>> >> >>>>>         "valid" : true,
>> >> >>>>>         "errors" : [ ],
>> >> >>>>>         "ok" : 1
>> >> >>>>> }
>> >> >>>>>
>> >> >>>>> db.chunks.validate(true) produced:
>> >> >>>>>
>> >> >>>>> db.chunks.validate(true)
>> >> >>>>> {
>> >> >>>>>         "ns" : "mydb.myfs.chunks",
>> >> >>>>>         "firstExtent" : "0:5000 ns:mydb.myfs.chunks",
>> >> >>>>>         "lastExtent" : "28:2000 ns:mydb.myfs.chunks",
>> >> >>>>>         "extentCount" : 43,
>> >> >>>>>
>> >> >>>>>             <snip "extents">
>> >> >>>>>
>> >> >>>>>         "datasize" : 49854830688,
>> >> >>>>>         "nrecords" : 190427,
>> >> >>>>>         "lastExtentSize" : 2146426864,
>> >> >>>>>         "padding" : 1,
>> >> >>>>>         "firstExtentDetails" : {
>> >> >>>>>                 "loc" : "0:5000",
>> >> >>>>>                 "xnext" : "0:162000",
>> >> >>>>>                 "xprev" : "null",
>> >> >>>>>                 "nsdiag" : "mydb.myfs.chunks",
>> >> >>>>>                 "size" : 8192,
>> >> >>>>>                 "firstRecord" : "0:50b0",
>> >> >>>>>                 "lastRecord" : "0:6f28"
>> >> >>>>>         },
>> >> >>>>>         "lastExtentDetails" : {
>> >> >>>>>                 "loc" : "28:2000",
>> >> >>>>>                 "xnext" : "null",
>> >> >>>>>                 "xprev" : "27:2000",
>> >> >>>>>                 "nsdiag" : "mydb.myfs.chunks",
>> >> >>>>>                 "size" : 2146426864,
>> >> >>>>>                 "firstRecord" : "28:20b0",
>> >> >>>>>                 "lastRecord" : "28:189c20b0"
>> >> >>>>>         },
>> >> >>>>>         "objectsFound" : 190427,
>> >> >>>>>         "invalidObjects" : 0,
>> >> >>>>>         "bytesWithHeaders" : 49857877520,
>> >> >>>>>         "bytesWithoutHeaders" : 49854830688,
>> >> >>>>>         "deletedCount" : 20,
>> >> >>>>>         "deletedSize" : 1733626640,
>> >> >>>>>         "nIndexes" : 2,
>> >> >>>>>         "keysPerIndex" : {
>> >> >>>>>                 "mydb.myfs.chunks.$_id_" : 190427,
>> >> >>>>>                 "mydb.myfs.chunks.$files_id_1_n_1" : 190427
>> >> >>>>>         },
>> >> >>>>>         "valid" : true,
>> >> >>>>>         "errors" : [ ],
>> >> >>>>>         "ok" : 1
>> >> >>>>> }
>> >> >>>>>
>> >> >>>>> I'm not sure whether "deletedCount" of "3" in the files
>> >> >>>>> collection
>> >> >>>>> would make up for all the deleted records that I expect ... if I
>> >> >>>>> interpret
>> >> >>>>> it correctly.   It would also not make up for the huge compaction
>> >> >>>>> that
>> >> >>>>> occurred in the repaired version of the collection (which is much
>> >> >>>>> smaller on
>> >> >>>>> disk, but still contains 279 records).
>> >> >>>>>
>> >> >>>>> Any thoughts on what to do next?   Thanks again for your help!
>> >> >>>>>
>> >> >>>>> T
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> On Friday, August 21, 2015 at 2:02:27 PM UTC-4, Asya Kamsky
>> >> >>>>> wrote:
>> >> >>>>>>
>> >> >>>>>> I'm assuming that you start up mongod with the original files
>> >> >>>>>> (ones
>> >> >>>>>> you did not repair but saved when you first noticed that
>> >> >>>>>> something
>> >> >>>>>> was
>> >> >>>>>> amiss):
>> >> >>>>>>
>> >> >>>>>> Okay, first, it would be good to know which collections are
>> >> >>>>>> missing
>> >> >>>>>> documents:
>> >> >>>>>>
>> >> >>>>>> You'd mentioned having 279 documents in files collection - how
>> >> >>>>>> many
>> >> >>>>>> are in chunks collection?   Do they "cross-verify"?
>> >> >>>>>>
>> >> >>>>>> You can see what each has here:
>> >> >>>>>> http://docs.mongodb.org/manual/reference/gridfs/
>> >> >>>>>>
>> >> >>>>>> So you can tell for each file how many chunk documents there
>> >> >>>>>> should
>> >> >>>>>> be
>> >> >>>>>> and for each chunk document you can tell which file it belongs
>> >> >>>>>> to
>> >> >>>>>> and in
>> >> >>>>>> what order.
>> >> >>>>>>
>> >> >>>>>> Some possible ways to verify this:
>> >> >>>>>> db.fs.chunks.aggregate({$sort:{files_id:1, num:1}},
>> >> >>>>>> {$group:{_id:"$files_id",count:{$sum:1},
>> >> >>>>>> chunks:{$push:"$num"}}}).
>> >> >>>>>>
>> >> >>>>>> If you have 279 files in your fs.files collection, then you
>> >> >>>>>> should
>> >> >>>>>> get
>> >> >>>>>> back 279 documents from this aggregation, each corresponding to
>> >> >>>>>> a
>> >> >>>>>> "file" in
>> >> >>>>>> gridFS.
>> >> >>>>>>
>> >> >>>>>> How many files do you expect to have?
>> >> >>>>>>
>> >> >>>>>> You can now run this command (very slow):
>> >> >>>>>>
>> >> >>>>>> db.fs.files.validate(true)
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> and
>> >> >>>>>> db.fs.chunks.validate(true)
>> >> >>>>>>
>> >> >>>>>> This will give output including fields described here:
>> >> >>>>>>
>> >> >>>>>> http://docs.mongodb.org/manual/reference/command/validate/#output
>> >> >>>>>>
>> >> >>>>>> You are interested in the deleted records
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> (http://docs.mongodb.org/manual/reference/command/validate/#validate.deletedCount
>> >> >>>>>> and the next field, deletedSize).
>> >> >>>>>> Basically these are the records that can be potentially
>> >> >>>>>> recovered -
>> >> >>>>>> can you provide output to above experiments so we don't waste
>> >> >>>>>> time
>> >> >>>>>> trying to
>> >> >>>>>> recover data that's not still there?
>> >> >>>>>>
>> >> >>>>>> Asya
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> On Fri, Aug 21, 2015 at 11:33 AM, <tob...@xxxxxxxxx> wrote:
>> >> >>>>>>>
>> >> >>>>>>> Great thanks so much for the help!
>> >> >>>>>>> T
>> >> >>>>>>>
>> >> >>>>>>> On Friday, August 21, 2015 at 9:49:36 AM UTC-4, Asya Kamsky
>> >> >>>>>>> wrote:
>> >> >>>>>>>>
>> >> >>>>>>>> Some of the records are in a free list and haven't been
>> >> >>>>>>>> overwritten
>> >> >>>>>>>> yet.  It may be possible to get them back.
>> >> >>>>>>>>
>> >> >>>>>>>> Since you're using version 2.4.x I believe that there is code
>> >> >>>>>>>> that
>> >> >>>>>>>> will walk through the DB files and dump out every document it
>> >> >>>>>>>> finds there.
>> >> >>>>>>>>
>> >> >>>>>>>> Let me dig around for it - obviously there is (a) no guarantee
>> >> >>>>>>>> that
>> >> >>>>>>>> this will restore all of them and (b) did you say these were
>> >> >>>>>>>> GridFS chunks -
>> >> >>>>>>>> if so then you won't get back a valid file unless all chunks
>> >> >>>>>>>> from
>> >> >>>>>>>> that file
>> >> >>>>>>>> are restored (along with corresponding files record).
>> >> >>>>>>>>
>> >> >>>>>>>> Asya
>> >> >>>>>>>>
>> >> >>>>>>>>
>> >> >>>>>>>> On Wednesday, August 19, 2015, <tob...@xxxxxxxxx> wrote:
>> >> >>>>>>>>>
>> >> >>>>>>>>> Thanks for the suggestion.   The logs don't appear to show
>> >> >>>>>>>>> any
>> >> >>>>>>>>> removals on that database since Oct. 2014.    At least as far
>> >> >>>>>>>>> as
>> >> >>>>>>>>> these logs
>> >> >>>>>>>>> go, it doesn't look to me like a remove was done.
>> >> >>>>>>>>>
>> >> >>>>>>>>> Any suggestions for what to do now?   I can definitely see
>> >> >>>>>>>>> traces
>> >> >>>>>>>>> of the data I want to get at in the (unmodified) database
>> >> >>>>>>>>> files.
>> >> >>>>>>>>> Is there
>> >> >>>>>>>>> no way to get at the data in them?
>> >> >>>>>>>>>
>> >> >>>>>>>>> Thanks!
>> >> >>>>>>>>> T
>> >> >>>>>>>>>
>> >> >>>>>>>>>
>> >> >>>>>>>>> On Wednesday, August 19, 2015 at 2:53:46 PM UTC-4, Asya
>> >> >>>>>>>>> Kamsky
>> >> >>>>>>>>> wrote:
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> The only CRUD operations that would be logged would be those
>> >> >>>>>>>>>> that
>> >> >>>>>>>>>> take longer than 100ms.
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> Might look something like this:
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> 2015-07-14T12:10:34.559-0500 I WRITE    [conn4822] remove
>> >> >>>>>>>>>> blackbox.bigcoll query: { _id: { $gte: 10000.0 } }
>> >> >>>>>>>>>> ndeleted:90000
>> >> >>>>>>>>>> keyUpdates:0 writeConflicts:0 numYields:12330 locks:{
>> >> >>>>>>>>>> Global: {
>> >> >>>>>>>>>> acquireCount: { r: 12331, w: 12331 } }, Database: {
>> >> >>>>>>>>>> acquireCount: { w: 12331
>> >> >>>>>>>>>> } }, Collection: { acquireCount: { w: 12331 } } } 249168ms
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> Of course this was a huge mass remove of giant docs which is
>> >> >>>>>>>>>> why
>> >> >>>>>>>>>> it took so long.   If the number of documents deleted was
>> >> >>>>>>>>>> small
>> >> >>>>>>>>>> (like if
>> >> >>>>>>>>>> they were deleted one at a time instead of via single
>> >> >>>>>>>>>> command)
>> >> >>>>>>>>>> then it's
>> >> >>>>>>>>>> much less likely they would have taken longer than 100ms.
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> I would suggest searching through the logs with equivalent
>> >> >>>>>>>>>> command
>> >> >>>>>>>>>> to:
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> grep 'remove <dbname>' *.log.*
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> where you would replace <dbname> with the name of your
>> >> >>>>>>>>>> actual
>> >> >>>>>>>>>> DB
>> >> >>>>>>>>>> and replace *.log.* with path to your log files.
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> Asya
>> >> >>>>>>>>>>
>> >> >>>>>>>>>>
>> >> >>>>>>>>>> On Tue, Aug 18, 2015 at 1:56 PM, <tob...@xxxxxxxxx> wrote:
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> On Tuesday, August 18, 2015 at 1:53:13 PM UTC-4, Asya
>> >> >>>>>>>>>>> Kamsky
>> >> >>>>>>>>>>> wrote:
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>> Do you have the logs from around the time the database
>> >> >>>>>>>>>>>> crashed?
>> >> >>>>>>>>>>>> When was that relative to when you noticed the records
>> >> >>>>>>>>>>>> missing?
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> Yes I have all the logs going back for a year.   I noticed
>> >> >>>>>>>>>>> the
>> >> >>>>>>>>>>> records missing the day that I wrote the help question
>> >> >>>>>>>>>>> originally (several
>> >> >>>>>>>>>>> days ago).   The crash happened several weeks ago.
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>> Since you had journaling on, and you were able to restart
>> >> >>>>>>>>>>>> mongod
>> >> >>>>>>>>>>>> without any errors, I would rule out the crash *causing*
>> >> >>>>>>>>>>>> records to
>> >> >>>>>>>>>>>> disappear, which leaves you with the possibility that they
>> >> >>>>>>>>>>>> were deleted.
>> >> >>>>>>>>>>>> If they were deleted and you don't have any backups from
>> >> >>>>>>>>>>>> *before* they were
>> >> >>>>>>>>>>>> deleted, then there's not much that can be done to recover
>> >> >>>>>>>>>>>> the data - sure,
>> >> >>>>>>>>>>>> it's possible to write code to literally look through the
>> >> >>>>>>>>>>>> deleted list, but
>> >> >>>>>>>>>>>> if any data was inserted after the deletes then that space
>> >> >>>>>>>>>>>> would be reused
>> >> >>>>>>>>>>>> and the old data would be overwritten forever.   :(
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> The backups are not from before unfortunately.   But
>> >> >>>>>>>>>>> wouldn't
>> >> >>>>>>>>>>> there be some record in the logs if there was a deleation?
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>> We haven't written any records since the deletes.   (The
>> >> >>>>>>>>>>> files
>> >> >>>>>>>>>>> haven't changed for months.)
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>> Asya
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>>
>> >> >>>>>>>>>>>> On Tue, Aug 18, 2015 at 12:54 PM, <tob...@xxxxxxxxx>
>> >> >>>>>>>>>>>> wrote:
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Hi Ankit --
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Thanks so much for your reply -- it's really appreciated.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Here are some answers to your questions:
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 1) it's a standalone deployment, with no replica set (I'm
>> >> >>>>>>>>>>>>> using
>> >> >>>>>>>>>>>>> journaling though)
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 2) version is 2.4.6
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 3) Here is the output of explain (done through pymongo):
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> {u'allPlans': [{u'cursor': u'BasicCursor',
>> >> >>>>>>>>>>>>>    u'indexBounds': {},
>> >> >>>>>>>>>>>>>    u'n': 0,
>> >> >>>>>>>>>>>>>    u'nscanned': 279,
>> >> >>>>>>>>>>>>>    u'nscannedObjects': 279}],
>> >> >>>>>>>>>>>>>  u'cursor': u'BasicCursor',
>> >> >>>>>>>>>>>>>  u'indexBounds': {},
>> >> >>>>>>>>>>>>>  u'indexOnly': False,
>> >> >>>>>>>>>>>>>  u'isMultiKey': False,
>> >> >>>>>>>>>>>>>  u'millis': 176,
>> >> >>>>>>>>>>>>>  u'n': 0,
>> >> >>>>>>>>>>>>>  u'nChunkSkips': 0,
>> >> >>>>>>>>>>>>>  u'nYields': 1,
>> >> >>>>>>>>>>>>>  u'nscanned': 279,
>> >> >>>>>>>>>>>>>  u'nscannedAllPlans': 279,
>> >> >>>>>>>>>>>>>  u'nscannedObjects': 279,
>> >> >>>>>>>>>>>>>  u'nscannedObjectsAllPlans': 279,
>> >> >>>>>>>>>>>>>  u'scanAndOrder': False,
>> >> >>>>>>>>>>>>>  u'server': myserver@myport}
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> (I think is saying 279 here because that is the number of
>> >> >>>>>>>>>>>>> records shown -- not 250, like I said before, that was a
>> >> >>>>>>>>>>>>> mistake.)
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> 4) Here is the index information (also through pymongo):
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> {u'_id_': {u'key': [(u'_id', 1)], u'v': 1},
>> >> >>>>>>>>>>>>>  u'filename_1_uploadDate_-1': {u'key': [(u'filename', 1),
>> >> >>>>>>>>>>>>> (u'uploadDate', -1)],
>> >> >>>>>>>>>>>>>   u'v': 1},
>> >> >>>>>>>>>>>>>  u'timestamp_1': {u'key': [(u'timestamp', 1)], u'v': 1}}
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> As you can see, this is from a ".files" collection used
>> >> >>>>>>>>>>>>> to
>> >> >>>>>>>>>>>>> manage a gridfs instance.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Also I think I have some additional information.   I
>> >> >>>>>>>>>>>>> copied
>> >> >>>>>>>>>>>>> the
>> >> >>>>>>>>>>>>> database files to a new location and ran a full repair on
>> >> >>>>>>>>>>>>> the whole
>> >> >>>>>>>>>>>>> database.   IN the new repaired version, there were many
>> >> >>>>>>>>>>>>> fewer files .ns
>> >> >>>>>>>>>>>>> files.   As is as if some records were removed using e.g.
>> >> >>>>>>>>>>>>> gridfs.GridFS.remove ... so in the new (compacted)
>> >> >>>>>>>>>>>>> version
>> >> >>>>>>>>>>>>> the data was
>> >> >>>>>>>>>>>>> finally deleted.  Maybe its somehow possible that the
>> >> >>>>>>>>>>>>> records were removed
>> >> >>>>>>>>>>>>> from the gridfs collection by someone running a remove
>> >> >>>>>>>>>>>>> operation?   I don't
>> >> >>>>>>>>>>>>> see any evidence of such an operation in the logs but
>> >> >>>>>>>>>>>>> perhaps it's possible?
>> >> >>>>>>>>>>>>> I've looked carefully at the logs and am not seeing
>> >> >>>>>>>>>>>>> anything
>> >> >>>>>>>>>>>>> obvious, but
>> >> >>>>>>>>>>>>> maybe dont' know what to look for.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Of course, that I guess does not modify the underlying
>> >> >>>>>>>>>>>>> files
>> >> >>>>>>>>>>>>> on
>> >> >>>>>>>>>>>>> the original collection where the (hypothetical) remove
>> >> >>>>>>>>>>>>> would have been run.
>> >> >>>>>>>>>>>>> If this indeed the case, is there some way to get the
>> >> >>>>>>>>>>>>> data
>> >> >>>>>>>>>>>>> back?  After all,
>> >> >>>>>>>>>>>>> I do have the original .ns files.
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> Thanks!
>> >> >>>>>>>>>>>>> Tob
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>
>> >> >>>>>>>>>>>>> On Monday, August 17, 2015 at 12:47:08 PM UTC-4, Ankit
>> >> >>>>>>>>>>>>> Kakkar
>> >> >>>>>>>>>>>>> wrote:
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> Hello Tobjan,
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> Thanks for reaching out to us. To assist us in the
>> >> >>>>>>>>>>>>>> investigation of this case, please provide us with
>> >> >>>>>>>>>>>>>> following information:
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> 1) Describe your MongoDB deployment (standalone, replica
>> >> >>>>>>>>>>>>>> set,
>> >> >>>>>>>>>>>>>> or sharded cluster)
>> >> >>>>>>>>>>>>>> 2) What version of MongoDB are you running? (You can
>> >> >>>>>>>>>>>>>> check
>> >> >>>>>>>>>>>>>> it
>> >> >>>>>>>>>>>>>> with db.version() in the mongo shell)
>> >> >>>>>>>>>>>>>> 3) Which query did you use to find or count the
>> >> >>>>>>>>>>>>>> documents?
>> >> >>>>>>>>>>>>>> Can
>> >> >>>>>>>>>>>>>> you please run that query with explain() and send us the
>> >> >>>>>>>>>>>>>> output?
>> >> >>>>>>>>>>>>>> 4) Output of db.collection.getIndexes() for the
>> >> >>>>>>>>>>>>>> collection
>> >> >>>>>>>>>>>>>> where documents appear to be missing.
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> Regards,
>> >> >>>>>>>>>>>>>> ankit
>> >> >>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>> On Monday, August 17, 2015 at 12:56:20 PM UTC+5:30,
>> >> >>>>>>>>>>>>>> Chris
>> >> >>>>>>>>>>>>>> De
>> >> >>>>>>>>>>>>>> Bruyne wrote:
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>> Can you give some more info?
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>> Like which query are you doing to find the documents,
>> >> >>>>>>>>>>>>>>> what
>> >> >>>>>>>>>>>>>>> is
>> >> >>>>>>>>>>>>>>> the structure of the docs, are there indexes on this
>> >> >>>>>>>>>>>>>>> collection?
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>> On Sunday, August 16, 2015 at 12:33:16 AM UTC+2,
>> >> >>>>>>>>>>>>>>> tob...@xxxxxxxxx wrote:
>> >> >>>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>> I have a mongodb database containing a collection that
>> >> >>>>>>>>>>>>>>>> has
>> >> >>>>>>>>>>>>>>>> remained unchanged for a period of time -- the
>> >> >>>>>>>>>>>>>>>> underlying
>> >> >>>>>>>>>>>>>>>> .ns and .0, .1 etc
>> >> >>>>>>>>>>>>>>>> files have not been modified for months. Up until a
>> >> >>>>>>>>>>>>>>>> few
>> >> >>>>>>>>>>>>>>>> weeks ago, I had no
>> >> >>>>>>>>>>>>>>>> problem reading records from the collection. There
>> >> >>>>>>>>>>>>>>>> were
>> >> >>>>>>>>>>>>>>>> thousands of records
>> >> >>>>>>>>>>>>>>>> in the collection.
>> >> >>>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>> However, today when I attempted to read the records,
>> >> >>>>>>>>>>>>>>>> many
>> >> >>>>>>>>>>>>>>>> of
>> >> >>>>>>>>>>>>>>>> them appeared missing -- e.g. records that I expected
>> >> >>>>>>>>>>>>>>>> to
>> >> >>>>>>>>>>>>>>>> be there were not,
>> >> >>>>>>>>>>>>>>>> although some of the records were available. Now,
>> >> >>>>>>>>>>>>>>>> there
>> >> >>>>>>>>>>>>>>>> appear to be only
>> >> >>>>>>>>>>>>>>>> 250 records. =
>> >> >>>>>>>>>>>>>>>>
>> >> >>>>>>>>>>>>>>>> I copied the database files and did a repair of the
>> >> >>>>>>>>>>>>>>>> (copied)
>> >> >>>>>>>>>>
>> >>
>> >> ...
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "mongodb-user"
>> > group.
>> >
>> > For other MongoDB technical support options, see:
>> > http://www.mongodb.org/about/support/.
>> > ---
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "mongodb-user" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to mongodb-user...@xxxxxxxxxxxxxxxx.
>> > To post to this group, send email to mongod...@xxxxxxxxxxxxxxxx.
>> > Visit this group at http://groups.google.com/group/mongodb-user.
>> > To view this discussion on the web visit
>> >
>> > https://groups.google.com/d/msgid/mongodb-user/ea02d25c-7c17-47f1-a443-c80956fbc5cd%40googlegroups.com.
>> >
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "mongodb-user"
> group.
>
> For other MongoDB technical support options, see:
> http://www.mongodb.org/about/support/.
> ---
> You received this message because you are subscribed to the Google Groups
> "mongodb-user" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to mongodb-user+unsubscribe@xxxxxxxxxxxxxxxx.
> To post to this group, send email to mongodb-user@xxxxxxxxxxxxxxxx.
> Visit this group at http://groups.google.com/group/mongodb-user.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/mongodb-user/6f2898d9-6d00-4ff7-af8e-4d4abab4b523%40googlegroups.com.
>
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.

For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
--- 
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe@xxxxxxxxxxxxxxxx.
To post to this group, send email to mongodb-user@xxxxxxxxxxxxxxxx.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/CAOe6dJDTQ7N7sXLnHv8HqcCKkxZ%3DwMBhMVnM34A%3DMfzS9bLYKg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

<Anterior por Tema] Tema Actual [Siguiente por Tema>
  • Re: [mongodb-Usuario] mongodb los registros viejos que pierden, Asya Kamsky <=